How a small detail can ruin your whole customer experience

McDonald's automated order kiosk

Having young kids, I sometimes take them to McDonald’s or Quick to reward them for their good marks. I’d prefer to eat at a good French restaurant, but they prefer McDonald’s. All their little friends go there frequently, so from my kids’ point of view, eating at McDonald’s looks like a social act: by doing so, they belong to their friends’ group.

For a few years now, McDonald’s and Quick have deployed automated order kiosks. They may sometimes be missing the latest product or special offer, but that’s OK, since you don’t have to wait in line and face a grumpy employee anymore (BTW, Quick employees are usually much friendlier than McDonald’s ones, at least in Paris).

While both places offer automated ordering, they provide radically different customer experiences, and Quick’s is much more pleasant than McDonald’s.

At Quick, you can browse the menu, fill your cart, change your mind, check out, and only then are you asked to insert your credit card.

At McDonald’s, you can’t even browse the menu without inserting your credit card first. It seems most people don’t care, but from my point of view, it’s a terrible experience. The kiosks also throw up to 3 or 4 very annoying, hard-to-close splash screens at you, trying to sell me things I don’t want to hear about. I don’t use automated kiosks to get more spam than I would from a regular cashier.

The same applies to some ATMs. Some of them won’t let you do anything before you enter your PIN, while others let you decide what you want first, then ask for your PIN.

The customer experience is radically different here. Typing your PIN at the last moment makes me feel much more secure. I know no one can pop up behind me and press a random button, forcing me to withdraw more cash than I wanted just « for fun », as often happens in Paris tourist spots.



Feedback on 1 year of partial remote working

Homework on the beach

It’s been one year since I started a partial remote working routine. I already had some experience with working remotely, but doing it every week requires a completely different setup, as it needs to become a habit.

At Botify, people who work remotely do it on Tuesdays and Thursdays. I believe it’s the best setup, since we see each other every two days. Partial remote working gets lots of management issues out of the way, the biggest one being your colleagues slowly acting like contractors and no longer feeling like team members.

That said, I know lots of companies work perfectly well with full remote working, but at the price of more constraining habits and routines, and a level of self-discipline not everybody is capable of.

Looking for the perfect setup

Working remotely does not necessarily mean working from home. It just means working… remotely. This year, I’ve worked from home, from the hospital to stay with one of my kids, from my holiday house by the sea, from many Starbucks, and even from a tennis club’s clubhouse while my son was playing his championship matches. I never watch him play; it’s more stressful than recovering from a MySQL corruption with no backups and no replication.

But I generally work from home. The Wi-Fi is much better, and I’ve set up a few things to ensure it goes smoothly.

First, you need a room to work from. Forget about working from the couch in your lounge with a tennis match playing on your flat screen: it only works a few days a year, and you end up with back issues.

If you can dedicate a room as your office, great. I can’t, so I work from my bedroom. It has a large desk, convenient light, and it’s at the end of a closed corridor, so it’s quiet enough to keep me focused when the kids are home. I’ve bought a nice chair and a 27-inch secondary flat screen. The secondary screen was a bit expensive, but it’s really worth the comfort it brings.

Speaking of the kids – and my wife – we’ve established two simple but critical rules:

  • When I’m working, it’s like I’m in my actual office so I can’t take care of the kids. Not even a minute.
  • If I leave the corridor open, people can come and go if they need something in the bedroom, just like in an open space. If I need to focus, I close the door.

When it’s sunny and warm enough, I move to my terrace, where the same rules apply.

These rules have been working quite well so far. I sometimes need to remind my wife that I’m working, not just at home, but not too often.

Time management

My time management is different from the days I work at the office. Most of the time, my Tuesdays and Thursdays are exactly the same from one week to the next, so they’ve really become a routine.

My work day starts around 7:30 AM instead of 9:30 when I’m at the office, with email reading and the most urgent (usually client-related) tasks, log reading, temporary file cleaning, etc. Unless things really went wrong during the night, it doesn’t last past 8:30 AM. Then I start my first 1-hour run.

I’ve already written about how I use Pomodoro (in French, sorry). When I’m working from home, I try to do 1-hour runs instead of 25-minute ones. This is made possible by the lack of interruptions, and by the fact that we use asynchronous communication tools and work in long runs.

At 10, we do a quick Hangout meeting to tell each other what we’re working on and what difficulties we’re facing. It lasts less than 10 minutes, but for me it’s the most important moment of the day, since it’s when we remember we’re a team.

Then I try to do two more runs before the lunch break. As my morning is quite long, I take long lunch breaks. I play tennis every Tuesday and every Thursday, and I treat myself to a long nap or a walk in the park. I take those breaks very seriously, as they allow me to regenerate my concentration pool.

I take a second, 15-minute break around 4:00 PM, when my elder son gets home from school. We share a snack, and I make sure he starts his homework before going to his tennis training. This break is very important to me, as partial remote working allows me to give my family much more time, since I save 2.5 hours of commuting each day. Then I’m back for two more runs.

I usually stop around 6:30 PM, sometimes 7:00 PM, to spend some time with the kids until they go to bed. Then, if needed, I’m back to work around 8:30 or 9:00 PM for a last hour.

Communicating with the team

The biggest problem with working remotely is communication. I’m happy we all work from France, so we don’t need to manage time zones.

Working remotely means you’re not 100% dedicated to reacting to your colleagues’ requests. This means using asynchronous communication tools.

At Botify, we use Slack, an enterprise chat solution. It offers great integration with many services we use (starting with GitHub), and a fairly decent IRC gateway for old-school people like me.

As I want to focus without being disturbed, I’ve deactivated every Slack notification, so I only get the information when I need it. On the other hand, as my IRC window lives in a terminal, which is also my main work area, I can see at a glance whether people have mentioned me and decide whether I can afford to have a look. If something really goes wrong, they can just give me a phone call.



You can’t always please everyone (and it’s OK)

Being popular is nice

A while ago, I was asked what advice I would give my cousin, who’s starting her first job. I had lots of it, as I love giving advice, but there’s one lesson I wish I had learned from the beginning: it’s OK not to please everyone.

When you start a new job, get married or join a club, trying to please everyone is a natural reflex. You don’t know anyone, and creating social links is critical if you want to belong to the group. And that’s OK too.

But trying to please everyone ends up being counterproductive and leads to schizophrenia.

Managing a platform is a key role at a SaaS company. If the platform gets slow or unavailable, your entire company is screwed. Clients leave, new clients don’t come, and your reputation is ruined.

It means you have to make lots of unpopular and frustrating decisions. You’ll have to refuse to deploy the latest hype because it’s not production-ready, refuse to give people access they want – but don’t really need – or refuse to push new code on a Friday, because that new feature can wait for Monday. You may even have to hand your keyboard to your boss and tell him « if you really want to get it done, do it yourself ».

And that’s OK too.

The higher you get, the more responsibilities you have, and the more you have to make unpopular choices. That’s part of the job.

You can still try to please everyone, but then you’ll end up making poor decisions. Poor decisions always backfire, and it takes lots of work and energy to fix them.

Saying no is not a crime. And it doesn’t mean you need to behave like a jerk and be hated by everyone around you.



I’m tweeting crap (and I’m OK with it)

Twitter nightmare

Last year, I had what seemed like an awesome idea: I downloaded my whole Twitter history and imported it into my blog database as short notes. I was deep in the IndieWeb mood, and I did not want all the short messages I had published online to disappear whenever Twitter decides so.

It was not an awesome idea. It was a terrible one.

I started to read through my early Twitter life, those messages I posted from the beach in a pre-iPhone era using my antique Nokia 6230. I was trying to remember when or why I said those things, what I was doing then, who I was meeting, and what my general mood towards the world was.

2007, leaving a job I hated… 2008, announcing my second kid’s birth on Twitter before I even called my mom, 2009… 2010… pushing drunk or depressed messages all over the net, private jokes with long-lost « friends », all mixed with links to more or less interesting but highly RTable content… They didn’t fit here.

They didn’t fit because I’ve always tried to write high-quality content here, to the point of forcing on myself a severe self-censorship, coupled with a severe impostor syndrome I’ve since learned to get rid of.

The truth is I’ve been tweeting crap since I joined the network, and I don’t really care.

I started to think about it a lot. The wisest path would be to ask myself « is that tweet worth pushing? », stop posting rants and poor-quality links, and polish my online self. I should respect my followers by only providing them with high-quality tweets.

After all, last week a marketing intern told me I was influential, which means I should care about that, shouldn’t I? Bullshit. I’ll be influential the day I tell my followers to dress in pink and send me 10€ each month, and they do it. I’m neither a brand nor a public figure, and I can afford to push stupid links and crappy jokes as much as I want, while every account playing the follower race ends up looking the same.

So I started to wonder why I don’t care more about what I post, and as a consequence, why I don’t care about losing my 28,778 tweets.

The reason is: the stream nature of Twitter.

When you look at a blog, you’ll find a structure, usually defined by the URL. Many blogs structure posts by year, month and day, using /yyyy/mm/dd/something, categories, tags, or author. Even this one, with its first-level permalinks, has a – quite flat – structure. You can browse it by tag if you want, because tags structure the way I want content to be found.

Twitter, Google+, Facebook… they have no structure. They’re flat streams of data you can search. Sure, they’re linked to their owner, but the content is still a stream.

There’s nothing new here, but it explains a lot.

Thinking about it, I don’t see a fundamental difference between Twitter and Snapchat. Snapchat makes the content disappear as soon as it’s read. Twitter doesn’t, but the content gets lost in the stream of information, and I strongly believe it’s not meant to last.

This is the very reason why I added notes to Publify. Notes are title-less, tag-less blog posts you can push directly to Twitter, with a link back to the Twitter message, following the POSSE philosophy. Notes are not just longer tweets, even though they often are longer. Notes are tweets that are meant to last and survive the service hype.



The Personal Blog

RSS was something

Yesterday, as I was mourning the time when blogs and RSS were something, Fred Wilson published a very heartwarming post about personal blogging. He notes how Lockhart Steele and Elizabeth Spiers, both pioneers of the NYC personal and business blogging scenes, are back on their own weblogs.

I can’t agree more with Fred when he says:

There is something about the personal blog, yourname.com, where you control everything and get to do whatever the hell pleases you. There is something about linking to one of those blogs and then saying something. It’s like having a conversation in public with each other. This is how blogging was in the early days. And this is how blogging is today, if you want it to be.

When I started blogging here at AVC, I would write about everything and anything. Then, slowly but surely, it became all about tech and startups and VC. It is still pretty much that way, but I feel like I’m heading back a bit to the personal blog where I can talk about anything that I care about.

For 15 years, I’ve considered the personal website, and its heir the blog, to be the most important thing on the Web. They are places of both online expression and online being, way deeper than any hosted community whose life comes and goes with its artificially generated hype.

When these silos decide, for financial purposes, what their members get to see from their friends, the personal website remains the only source of raw information, close to our human selves with their lights and shadows.



Nginx Optimization: understanding sendfile, tcp_nodelay and tcp_nopush

WWW

This article is a translation, by popular request, of Optimisations Nginx, bien comprendre sendfile, tcp_nodelay et tcp_nopush, which I wrote in French in January.

Most articles dealing with optimizing Nginx performance recommend using the sendfile, tcp_nodelay and tcp_nopush options in the nginx.conf configuration file. Unfortunately, almost none of them explain how these options impact the Web server, or how they actually work.

Everything started after Greg did a peer review of my Nginx configuration. He challenged my optimizations, asking me whether I really knew what I was doing. I started to dig into the basement of the TCP stack, as mixing sendfile, tcp_nodelay and tcp_nopush seemed about as logical as a pacifist joining the Navy SEALs (which have nothing to do with baby seals).

On tcp_nodelay

How can you force a socket to send the data sitting in its buffer? One answer lies in the TCP_NODELAY option of the TCP(7) stack. Activating TCP_NODELAY forces a socket to send the data in its buffer, whatever the packet size. The Nginx option tcp_nodelay adds the TCP_NODELAY flag when opening a new socket.

To avoid network congestion, the TCP stack implements a mechanism that waits for more data for up to 0.2 seconds so it won’t send a packet that would be too small. This mechanism is Nagle’s algorithm, and 200 ms is the value used in UNIX implementations.

To understand Nagle’s purpose, you need to remember that the Internet is not only about sending Web pages and huge files. Imagine yourself back in the 90s, using telnet to connect to a remote machine over a 14,400 bps dial-up connection. When you press ctrl+c, you send a one-byte message to the telnet server. To that byte, you need to add the IP header (20 bytes for IPv4, 40 bytes for IPv6) and the TCP header (20 bytes). When pressing ctrl+c, you actually send 41 bytes over the network (61 with IPv6). Nagle waits a bit in case you have something else to type before the data is sent.

That’s cool, but Nagle is not relevant to the modern Internet anymore. It is even counterproductive when you need to stream data over the network. The chances that your file fills an exact number of full packets are close to zero, which means Nagle adds up to 0.2 seconds of latency on the client side for every file it downloads.

The TCP_NODELAY option allows you to bypass Nagle and send the data as soon as it’s available.

Nginx uses TCP_NODELAY on HTTP keepalive connections. Keepalive connections are sockets that stay open for a while after sending data. Keepalive allows sending more data without initiating a new connection and replaying the TCP 3-way handshake for every HTTP request. This saves both time and sockets, as they don’t switch to FIN_WAIT after every data transfer. Connection: Keep-Alive is opt-in in HTTP 1.0 and the default behavior in HTTP 1.1.

When downloading a full Web page, TCP_NODELAY can save you up to 0.2 seconds on every HTTP request, which is nice. When it comes to online gaming or high-frequency trading, getting rid of latency is critical, even at the price of some network saturation.
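For the curious, here is what that boils down to at the system call level. This is a minimal sketch of my own, not Nginx source code:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Minimal sketch: disable Nagle's algorithm on a connected TCP socket.
 * Small writes then leave immediately instead of waiting up to 200 ms
 * for a full packet to accumulate. */
static int disable_nagle(int sock_fd)
{
    int on = 1;
    return setsockopt(sock_fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
}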

On tcp_nopush

In Nginx, the configuration option tcp_nopush works as the opposite of tcp_nodelay. Instead of optimizing for latency, it optimizes the amount of data sent at once.

To keep everything logical, Nginx’s tcp_nopush activates the TCP_CORK option in the Linux TCP stack, since the TCP_NOPUSH option only exists on FreeBSD.

The well-named TCP_CORK holds the data back until the packet reaches the MSS, which equals the MTU minus the 40 or 60 bytes of IP and TCP headers.

Life and death of a TCP_CORK

Everything is well explained in the Linux kernel source code:

/* Return false, if packet can be sent now without violation Nagle's rules:
 * 1. It is full sized.
 * 2. Or it contains FIN. (already checked by caller)
 * 3. Or TCP_CORK is not set, and TCP_NODELAY is set.
 * 4. Or TCP_CORK is not set, and all sent packets are ACKed.
 *    With Minshall's modification: all sent small packets are ACKed.
 */

static inline bool tcp_nagle_check(const struct tcp_sock *tp,
                                   const struct sk_buff *skb,
                                   unsigned int mss_now, int nonagle)
{
        return skb->len < mss_now &&
                ((nonagle & TCP_NAGLE_CORK) ||
                 (!nonagle && tp->packets_out && tcp_minshall_check(tp)));
}

TCP_CORK needs to be explicitly removed if you want to send half-empty (or half-full) packets.

TCP(7) manpage explains that TCP_NODELAY and TCP_CORK are mutually exclusive, but they can be combined since Linux 2.5.9.
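In practice, corking looks like this (a simplified sketch of my own, not Nginx code): set the option, queue your writes, and nothing partial leaves the host until you explicitly pop the cork.

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>

/* Minimal sketch (Linux only): hold small writes back with TCP_CORK,
 * then remove the cork so the trailing partial packet is flushed. */
static void send_corked(int sock_fd, const char *part1, size_t len1,
                        const char *part2, size_t len2)
{
    int state = 1;
    setsockopt(sock_fd, IPPROTO_TCP, TCP_CORK, &state, sizeof(state));

    write(sock_fd, part1, len1);   /* buffered by the kernel */
    write(sock_fd, part2, len2);   /* still buffered */

    state = 0;                     /* pop the cork: the partial frame goes out */
    setsockopt(sock_fd, IPPROTO_TCP, TCP_CORK, &state, sizeof(state));
}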

In the Nginx configuration, tcp_nopush must be activated along with sendfile, which is exactly where things get interesting.

On sendfile

Nginx’s initial fame came from its awesomeness at serving static files. This has a lot to do with the combination of sendfile, tcp_nodelay and tcp_nopush in nginx.conf. The Nginx sendfile option enables the use of sendfile(2) for everything related to… sending files.

sendfile(2) transfers data from one file descriptor to another directly inside kernel space. This saves lots of resources:

  • sendfile(2) is a single syscall: the whole transfer happens in kernel space, with no costly round trip between kernel and user space for every chunk.
  • sendfile(2) replaces the combination of read(2) and write(2).
  • here, sendfile(2) allows zero copy, which means writing the kernel buffer directly from the block device memory through DMA.
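To make it concrete, here is a minimal sketch (mine, not Nginx’s) of what serving a file through sendfile(2) looks like from user space, assuming an already connected socket:

#include <fcntl.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <unistd.h>

/* Minimal sketch: push a local file to a connected socket in one syscall.
 * The kernel moves the data from the page cache straight to the socket
 * buffers, without any user space buffer. A real server would loop here,
 * since sendfile(2) may send less than requested. */
static ssize_t send_static_file(int sock_fd, const char *path)
{
    int file_fd = open(path, O_RDONLY);
    if (file_fd < 0)
        return -1;

    struct stat st;
    if (fstat(file_fd, &st) < 0) {
        close(file_fd);
        return -1;
    }

    off_t offset = 0;
    ssize_t sent = sendfile(sock_fd, file_fd, &offset, st.st_size);

    close(file_fd);
    return sent;
}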

Unfortunately, sendfile(2) requires an input file descriptor that supports mmap(2) and friends, which rules out UNIX sockets, for example as a way to stream data from a local Rails backend without any network overhead.

The in_fd argument must correspond to a file which supports mmap(2)-like operations (i.e., it cannot be a socket).

Depending on your needs, sendfile can be either totally useless or completely essential.

If you’re serving locally stored static files, sendfile is essential to speed up your Web server. But if you use Nginx as a reverse proxy to serve pages from an application server, you can deactivate it. Unless you start serving a micro cache from a tmpfs. I’ve been doing that here, and I didn’t even notice the day I was featured on the HN homepage, Reddit or good old Slashdot.

Let’s mix everything together

Things get really interesting when you mix sendfile, tcp_nodelay and tcp_nopush together. I was wondering why anyone would mix two antithetical, supposedly mutually exclusive options. The answer lies deep inside a 2005 thread from the (Russian) Nginx mailing list.

Combined with sendfile, tcp_nopush ensures that the packets are full before being sent to the client. This greatly reduces network overhead and speeds up file delivery. Then, when it reaches the last, probably half-full, packet, Nginx removes tcp_nopush. tcp_nodelay then forces the socket to send the data immediately, saving up to 0.2 seconds per file.

This behavior is confirmed in a comment from the TCP stack source about TCP_CORK:

When set indicates to always queue non-full frames. Later the user clears this option and we transmit any pending partial frames in the queue. This is meant to be used alongside sendfile() to get properly filled frames when the user (for example) must write out headers with a write() call first and then use sendfile to send out the data parts. TCP_CORK can be set together with TCP_NODELAY and it is stronger than TCP_NODELAY.

Nice, isn’t it?
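Put into code, the dance looks roughly like this. It is a simplified sketch of the pattern, not actual Nginx source: cork the socket, write the headers, stream the file body with sendfile(2), then uncork so the last packet leaves right away.

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/sendfile.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Simplified sketch of the combined pattern: only full frames leave while
 * the cork is set, and removing the cork flushes the trailing, probably
 * half-full, packet without waiting for Nagle's 200 ms timer. */
static void serve_file(int sock_fd, int file_fd, off_t file_size,
                       const char *headers, size_t headers_len)
{
    int on = 1, off = 0;
    off_t offset = 0;

    setsockopt(sock_fd, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));

    write(sock_fd, headers, headers_len);           /* headers wait in the buffer */
    sendfile(sock_fd, file_fd, &offset, file_size); /* body goes out as full frames */

    setsockopt(sock_fd, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));
}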

Here we are, I think we’re done. I did not mention writev(2) as an alternative to tcp_nopush on purpose, to avoid adding complexity. I hope you enjoyed reading this; don’t hesitate to send me an email if you have something to add, I’ll publish it with pleasure.

Many thanks to Arthur, Bruno, Bsdsx and Ludovic for proofreading this article, and to Greg for both his deep knowledge and for kicking my ass until I came back to him with answers to his questions.



An omnipresent web

A talk given at the Rencontres de Lure, on the theme CHEMINS DE FAIRE, ACTIVER LA PAGE BLANCHE // Traverse. One hour and an unknown audience, far outside my comfort zone…

I have taken several side paths over the course of my life. The first was moving from biology to computing, and more specifically to the web. Then I fairly quickly decided to work for myself to gain more freedom. I next went to Japan for a year to explore a new culture and other ways of living and thinking. And finally, I co-founded a SCOP (a worker cooperative) upon returning to France two years ago. Each of these experiences was a chance to start again from a blank page. Or almost: to make my past and my culture actors in new interactions, in new domains.

By discovering the web, I explored a world of relationships that was, in the end, not so far from biology. By discovering the freelance life, I became aware of the stakes and responsibilities that fall on a business owner, each client becoming a little boss. By discovering Japan, I learned to appreciate the singularities of French culture. By discovering collaboration, I was confronted with the difficulties of a democratic approach.

Today, with scopyleft, we are experimenting with activating other people's blank pages so that, together, we produce as much value as possible. We tried agility before realizing we had to work even further upstream of projects, drawing on Lean Startup methods (notably the Lean Canvas). The relevance of an idea can be checked before even diving into the technical side, through interviews or « embryo products ».

I picture the web as an expanding universe. Its contours are ill-defined — we know it is made of clusters of clusters of galaxies — and we imagine it as more or less spherical. Among this multitude of stars, planets have formed, and some happen to offer pressure and temperature conditions favorable to the emergence of encounters. I feel like an asteroid that landed by chance on the planet of the Rencontres de Lure. So that we could share a common vocabulary, I asked three questions to get the exchange going during the hour that followed:

  • Who among you works in the web?
  • Who among you writes code for the web (HTML, CSS, JS)?
  • Who has a Facebook account? Twitter? Gmail?

A web

The problem with a centralized web is that the few points of control attract some unsavory characters. […] It’s not just possible, but fairly common for someone to visit a Google website from a Google device, using Google DNS servers and a Google browser on the way.

The Internet With A Human Face

The web is often called « la toile » (the spiderweb) in French, which gives it a concentric representation with the spider usually sitting at the center. It is a rather poor image of the original web; unfortunately, the metaphor is getting closer and closer to today's web. We started from a more or less de-centered web and ended up with a web that looks like a television on which we zap between 6 tabs (Google, Facebook, Twitter, Instagram, Wikipedia, Amazon). This position gives these monopolies a worrying power in three ways:

  • They can fragment the web. Some content and some data only become accessible by being part of the platform. By publishing on these sites, you take part in this fragmentation under the guise of elitism/snobbery.
  • They can filter the web. The algorithms designed to show you "relevant" content are dangerous blinders. By consulting only these sources of information, you become a prisoner of smooth bubbles of complacency.
  • They can monetize the web. From your data, your relationships, your interactions, your mere explorations. Your profile gains value if you are sick, if you are a big spender, if you get pregnant!

The galaxy clusters I mentioned in the introduction are aggregating and losing their heterogeneity. How will a peer-to-peer network evolve with such inequalities between peers?

We are also witnessing an app-ification of the web which, under the guise of simplicity, funnels your interactions across the network through black boxes that have neither the simplicity of web technologies nor the readability of their code.

Diversity on the web is shrinking to the point where a personal page now makes you look like an outsider. Or even a suspect?

Omni

The cost of surveillance is far too low.

Lettre aux barbus, Laurent Chemla

There is a lot of talk about the Internet of Things, the Quantified Self or Open Data, with the underlying idea that lots of data (Big Data — BINGO!) will flow between us, our objects and our environment at large to enrich Silicon Valley hipsters (sorry: to make our lives simpler).

Unfortunately, what we have realized with Snowden, and ever since, is that this data is mostly used to track us at scale. This generalized surveillance is worrying for three reasons:

  • Loss of trust in politics. It was already not great, but now it is enough to make you doubt your own interest in citizenship. The powers that be have everything to gain from being left to play among themselves. But that is no longer democracy…
  • A feeling of insecurity and a flattening of opinion. If every citizen becomes a suspect, you have to blend into the crowd. To fool the algorithms, to fool the (future) drones, and eventually to fool yourself. And once you have conformed to the mold enough, you lash out at the neighboring minority to release your stress and feel alive. Or you retweet a truly just cause… but a fleeting one too.
  • Giving up on digital privacy. Since nothing works anymore, we might as well live with it and stop tilting at windmills. Anyway, those who are afraid must have something to hide, right? Or maybe we want an intimate web, a web that allows mistakes, a web that denounces injustice?

Faced with this generalized surveillance, to live happily, should we live submerged?

Present

Seven generation sustainability is an ecological concept that urges the current generation of humans to live sustainably and work for the benefit of the seventh generation into the future.

Great Law of the Iroquois

The Internet never forgets. We have all heard that adage, yet it is largely false. Pages, photos and data disappear every day. When a service shuts down, thousands, even millions of accounts are lost. I have called this a datacide: witnessing a genocide of data. It can have beneficial effects, and the right to be forgotten obviously comes to mind, but the problem is that the Internet does not act like remembrance — the way we remember what we have lived — but like a half-erased logbook. We do not choose what is kept; we endure it.

Faced with this digital guilt, we end up with a kind of digital exhibitionism: the more I publish, the less visible the things I want to hide will be. We get streams without reflection, without any hope of archiving, without any control. Letting go of our online interactions is symptomatic of a generalized unawareness of the use that can be made of them.

Phew! We survived the somewhat depressing introduction (I managed to dampen the mood of a room full of typographers — heh). If we look at each aspect of this omnipresent web, we see that it is mainly a problem of comfort. The web fragments because we do not bother to run our own server, it is massively surveilled because we are lazy about encryption, and it disappears because we do not want to worry about our digital traces. What non-technical avenues are there for a healthier web?

Avenues

Advocate

Activism can have an impact if it is practiced on a large scale. The strength of the web is its ability to transmit and spread information very quickly. We must use this tool wisely!

Disconnect

Take my own example: I have no Facebook account, I have gone on several tweet diets, and I no longer own a smartphone. It is certainly extreme, but it has not killed me digitally. I have actually been doing rather better since. Questioning your own usage helps you become aware of what really has value.

Innovate locally

I place a lot of hope in local initiatives. Many projects are gestating and growing around small communities in a decentralized way. A way to adapt to the local culture, to recreate a kind of digital intimacy.

Educate

This dynamic of openness will not happen without education. Not only for children: unfortunately, we do not have the luxury of waiting for the new generations to become the majority. We would need mass civic education; 100 people today who will pass it on to 1,000 others tomorrow? ;-)

Reclaim

By using convivial tools as defined by Ivan Illich:

  • it must not degrade personal autonomy by making itself indispensable
  • it creates neither slave nor master
  • it broadens one's personal radius of action

It is time to reclaim our knowledge so that we can regain our autonomy and offer some to others.

The concentration of galaxies causes a rise in temperature that usually ends in black holes. What other levers do we have to keep the web from being sucked into these black holes? I started the discussion with this quote:

You must choose: rest or be free.

Thucydides, ~2400 years before Facebook

Discussion

Technical questions

There was a lot of discussion about the technical feasibility of such surveillance. A first look back at the Snowden affair makes the picture crystal clear. And it is even worse given everything that has been uncovered since…

Questions about fear

I was asked what I was afraid of, with the famous « Nothing to hide, nothing to fear » coming up again. I am not afraid; I am questioning an observation, and my own indirect participation in the current situation as an actor in this system. I am exploring solutions, and I go looking for them in places like the Rencontres de Lure, to find there a certain technical naivety and an experience a few millennia old.

Technical solutions

I was nevertheless asked to give a few technical solutions. Here are some suggestions:

These 4 points are very basic; you can then look into solutions such as virtual private networks (VPNs) or Tor to go further.

The web is a precious invention; let's preserve its graph: its links and its data.

Should I use Ansible or Puppet? (short answer: both)

My new Puppet

Lately, we had a debate about whether we should use Ansible or Puppet. We were not discussing which one is the best, but which one suits our needs better.

The answer came: we should use both.

At Botify, the infrastructure relies on 2 core principles: immutability and blue / green deployment.

Immutability means that once you’ve built something, you never change it. For every deployment, we build images of our virtual machines from scratch, then deploy them. If something goes wrong on a machine, we trash it and replace it with a new one, launched from the same image.

Immutability means longer builds, but it also means more consistency, no upgrade conflicts, no forgotten virtual machine still running the old code. In a word, immutability means no alarms and no surprises.

Blue / green deployment means that for every deployment, we build the new infrastructure (the green one), then switch from blue to green. Blue / green deployment is very powerful because it allows rolling back if something goes wrong.

To ensure a perfect deployment, we need to make sure a build goes well from start to finish, and that the build virtual machines reach the desired state before we make the image.

That’s where you realize choosing between Puppet and Ansible gets tricky.

Puppet works in 2 modes. A daemon or a cron-launched script can query a server, the Puppet master, for new updates, or you can run Puppet standalone and call the modules you need. Since we’re building immutable machines, using a Puppet master is useless, as we never update the machines’ state.

Puppet is literally a state machine. It tries to reach the most complete state possible, even if it fails here and there. To achieve this, it orders the tasks as best it can, even though it may, at some point, reach an inconsistent state. To avoid this, its DSL provides a dependency system that works quite well.

This is both great and a real problem. Puppet won’t stop if something fails. It will just skip everything that depends on what failed and keep going. When you start automating your builds, this is critical, as there is no easy way to check what went wrong, and you can easily build inconsistent, buggy virtual machines.

To avoid this, you either keep an eye on your build, hoping you won’t miss a single error, or you rely on things like Server Spec. Server Spec looks like Ruby’s RSpec: it provides a human-readable language to test your server’s state.

Unfortunately, that sucks. Really. First, you write a complete description of the state your machine should reach with Puppet, then you write another complete description of the state it should have reached thanks to Puppet. There’s something wrong here: you can’t rely on Puppet to reach the state you want, since it won’t stop when something goes wrong.

Then, you have Ansible.

Ansible is very different from Puppet. Ansible is not a state machine, as it has no notion of state. Ansible runs a sequential series of tasks, and stops when one fails. This is great in many ways, as you know for sure when something went wrong. There is no need for a dependency system, since Ansible just runs the tasks one after the other: install a package, push a file…

Instead of a master / slave architecture, Ansible runs with a concept of inventory: machines belong to groups, groups depend on roles, and roles include one or many tasks. Tasks are run sequentially on every host in the inventory. That part is awesome when you maintain a bunch of machines, but it’s totally useless when going immutable. If a task fails on a host, Ansible stops processing that host but keeps working on the others, which is exactly what you’d expect when running immutable too.

Edit: fixed this: “useless” applied to the inventory part, not to the failure behavior when going immutable. Thank you @laserllama for noticing.

Just like Puppet, Ansible has its good and bad sides. Since you don’t define a target state, using Server Spec to check whether you reached the desired state is almost mandatory. But at least you know when something went wrong.

So, why would you need both of them?

In my experience, 99.5% of Puppet errors come from package installation. Either a package no longer exists in the required version, or the Node.js index is down once again, or PyPI times out…

Because of that, and because of Puppet’s main limitation, package installation should not be done by Puppet, but by something else. Puppet is great at managing configuration when it has everything it needs. You can still ensure package XYZ is installed before running the configuration part, but you should not let Puppet install it.

Until now, I’ve mostly been using Ansible for EC2 orchestration. Ansible has a bunch of nice AWS modules (I’ve contributed to some of them) to help build a new platform: start an instance, build an AMI, create a security group, a launch configuration or an autoscaling group…

I’m thinking more and more about moving the whole install part, which is currently managed by Puppet, into Ansible to get that missing consistency. I’d then probably add a Docker layer somewhere to make new machines build faster, as some parts don’t change that often. Booting a new machine would then download the Docker images it needs, further limiting the risk of errors by rebuilding only small parts of them.



On finding and knowing your core values

Your core values hiding behind the clouds

Do you know what your core values are? I may be 36, but I hadn’t thought about it until recently. They are, however, critical when it comes to choosing your life path.

During my summer vacation, I read the Heath brothers’ Decisive: How to Make Better Choices in Life and Work. That’s exactly the kind of book I usually avoid because the title is too well marketed, but I don’t regret a single minute I spent on it.

Decisive is about getting better at making choices by understanding the psychological process behind decisions, and improving what can be improved.

One of the chapters that really struck me is about defining your core values. When you need to make a decision that impacts your whole life, you should confront it with who you really are. So the first thing to do is actually understand who you really are, or who you want to be. Does quitting your job for money make you a prostitute? Does taking a boring job that fits your work / life balance make you a boring person? The answer is not as easy as it seems.

Defining your core values is a good way to get to know yourself better. It’s quite complicated too, because your core values can change as time goes by and you gather experience.

I took some time to think about it, and tried to understand what my core values were for every important decision I’ve had to make over the past 20 years.

Being young is awesome, because you do most things without asking yourself why. It also sucks, because you need to make life-impacting decisions without any experience or distance: choosing a career path, getting engaged to someone you barely know but choose to spend your life with, having kids without knowing what it’s about. Thinking about it, I was always moving forward, without taking the time to think.

However, there were a few things that always counted in my decisions, which I can now recognize as my core values at the time.

The most important ones were having fun and learning. I couldn’t stand being bored, even when boredom meant a bright future. I joined a well-known French political sciences school and was bored to death. As a consequence, I failed miserably during the 2 years I spent there, but became great at roller and ice skating.

Then I joined a computer engineering school and had lots of fun; not every day, but the topic I was studying was fun enough to keep me focused for 4 years, despite having to work to pay for my studies, losing my father to cancer, losing my job, getting married and having an unexpected kid, all in the same year. I had to make lots of urgent decisions: find a new job to pay for school (but neither in the porn nor in the gambling industry, even though they offer some interesting technical challenges) and feed my growing family, decide what to do with my girlfriend and the baby, and so on… I had to decide with my gut, but every time, my core values remained.

Over the years, I don’t think my core values have changed that much. Having fun and learning from what I do is still important, but feeding my family and spending more time with them have become quite important as well. When I decided to leave blueKiwi a year ago, I did not think about it this way, but I’m pretty sure those 4 things weighed a lot in my decision.



The more complicated, the more secure?

The nightmare before DNSSEC

When the folks at the IETF thought about securing DNS, they had an awesome idea: DNSSEC. Indeed, DNS needed to be hardened: it’s the most used and one of the most vulnerable protocols on the Internet. Well, sort of.

So they gathered and came up with this:

To secure DNS, let’s force normal domain owners to enter the wonderful world of asymmetric cryptography so they get their zones signed, and resolvers can check the zones against the signatures.

The first part was already not trivial for normal people buying a domain name for their wedding photo album. Fortunately, most shared hosting providers also sell domain names, and those are pretty easy to configure. But that’s not always the case.

Signing was too trivial, and not secure enough, so they went a little further:

Hey, what if we asked people to re-sign their zones every 30 days or they won’t resolve at all? And indeed, for more security, re-signing should be done manually!

And you expect that thing to spread fast and worldwide (with zones supporting cool shorthand IPv6 addresses)?

