PKI is needed for micro-services

Hello!

Today I want to explain why I think a proper micro-service architecture needs a PKI.

First, the problem: in a networked application you need a way for one service to authenticate itself to another, and for that other service to verify it.

One way to do this is with usernames and passwords, or tokens. This works, but it raises a few questions: where do you store the secrets, how do you deploy them to every node securely, and how do you revoke access for just one node?

When you use only usernames/passwords or tokens, it quickly becomes a mess: everything ends up written into config files, and revocation is not easy and needs careful orchestration to avoid downtime.

PKI is a strong, standard way to get mutual authentication between two endpoints.

Managing a CA is not an easy task, but the effort pays off if you care about security and want to avoid a spaghetti-style security setup.
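To give a rough idea of what this means in practice (the file names and subjects below are only an illustration, not a complete production setup), a private CA and a per-service certificate can be created with openssl like this:

# create the CA key and a self-signed CA certificate
openssl genrsa -out ca.key 4096
openssl req -new -x509 -key ca.key -out ca.crt -days 3650 -subj "/CN=My Private CA"

# create a key and a signing request for one service, then sign it with the CA
openssl genrsa -out service-a.key 4096
openssl req -new -key service-a.key -out service-a.csr -subj "/CN=service-a.internal"
openssl x509 -req -in service-a.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out service-a.crt -days 365

Each service then presents its own certificate and verifies the peer's certificate against ca.crt, and cutting off a single node means revoking (or simply not renewing) that node's certificate.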

Someone would say: but we can trust the source IP!
The short answer to this is: no.

The long answer is: no! no! no! no! no! no! no! no! no!

An IP address is not secure by design: the network can be manipulated quite easily by anyone with L2 access (for example, via ARP spoofing from a single compromised server).

Also, the IP layer is not encrypted by default, so if you have to add some kind of encryption on top in your application anyway, what's the point of encrypting everything with a pre-shared key when you can use an asymmetric scheme?

I hope I’ve made my point and that you will use PKI for your next micro-service application.

Bye!

The magic of ~/.ssh/config

Hi, today I want to talk about the ~/.ssh/config file.

First thing about this magic file: if you are using ssh, you must have this file. This is a fact.

For example, I use git over ssh, because ssh is a very good protocol, and since git over ssh doesn't need a TTY we can put something like this in the config file:

Host git.kde.org
 User git
 RequestTTY no

In this way I can execute

ssh git.kde.org

without seeing the annoying “PTY allocation request failed” message.

Another bit of sorcery happens when you have multiple ssh keys. I have one key for each “scope”, for example: one for KDE, one for GitHub, one for GitLab, one for my home, etc.

I don’t want to use the “-i” option every time to select the right key, so I use the IdentityFile option, for example applied to “*.kde.org”, as in the snippet below.
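A minimal sketch of what I mean (the key file names here are just placeholders for whatever your keys are called):

Host *.kde.org
 IdentityFile ~/.ssh/id_kde

Host github.com
 IdentityFile ~/.ssh/id_github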

Sometimes I have to connect to a server without direct TCP access to the sshd daemon. In this scenario I use the “ProxyCommand” option, which is a command executed to proxy the ssh connection through another host, for example “ProxyCommand ssh bastion.<domain> nc %h %p”.
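Inside a Host block it looks something like this (both bastion.<domain> and the internal host name are placeholders):

Host internal.<domain>
 ProxyCommand ssh bastion.<domain> nc %h %p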

The last useful thing is that you can create an alias for a host. For example, my server with FQDN “srv.domain.tdl” listens on port 1900 TCP, and I can create an alias like “srv” using something like:

Host srv
 HostName srv.domain.tdl
 Port 1900

With this config I can run “ssh srv” and be on my server.

Thanks for reading.

Micro-services are only half the picture

Hello,

today I want to share my thoughts about the general hype around micro-services.

The first objection one can raise against this approach is that it does not really solve the problem of writing maintainable code: the same principles can be found in many other paradigms that did not prevent bad software from being produced.

I believe that the turning point of the micro-services approach is that it is compatible with the DevOps philosophy.

With the combination of micro-services and DevOps you get software that has reasonably well-defined boundaries and whose management is assigned to the people who developed it.

This combination avoids the development shortcuts that make management more difficult (maintenance is a big deal).

This also helps with one of the great open problems of IT: documentation.

It is true that it cannot force us to produce documentation, but at least the people who run the code are exactly the people who wrote it, and I can assume that whoever wrote the code knows how it is supposed to work.

It is now possible to build applications with performance and functionality unimaginable before, all thanks to the fact that each component can be built, evolved and deployed with the best life cycle we are able to come up with, without constraining the entire ecosystem.

Thanks for reading, see you next time

Hack your life!

Hi!

Today I want to encourage you to hack your life.
I’m not talking about things like “opening a wine bottle with a CD”, I’m talking about real hacks.

Hacking something means looking at it in a new way, in a way it was never supposed to be looked at.

When you think of a hacker, the first picture is somebody with a computer (or a smartphone), and most of the time that is quite correct.

But think about what this person is doing: mainly they are trying to use a piece of software in a strange way to get something new, and I think everyone should do this with their life.
I’m trying to do so, trying to replace my bad habits with something useful to me.

This kind of hacking is not easy: the real world is not like software (in this respect at least), you can’t reset to a checkpoint, so hacking your life is quite dangerous.
But sometimes you have to try.

When you hack the real world you may find something funny, for example a “bug” in common sense, and this “bug” can be used to get to your goal.

Like all scientific research, this has no clear, useful return on investment, and that is exactly the point.

The only way to find out what you will find is to go and find it.

So…hack your life!

SSH and complex configs

Hi!

Today I want to talk about the .ssh/config file; for those who don’t know it, it is the configuration file where SSH lets you customize connection options.

The issue with this file is that it doesn’t support any kind of “include”, and this can be a problem if you have to write a long config file.

I wrote a small shell script to work around this (you can see the script here: https://quickgit.kde.org/?p=scratch%2Ftomaluca%2Fssh-build-config.git).

This script builds .ssh/config by reading config fragments from .ssh/config.d/, in order and recursively.
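The idea, as a minimal sketch (this is not the actual script linked above, just an illustration of the approach):

#!/bin/sh
# rebuild ~/.ssh/config from the fragments in ~/.ssh/config.d/,
# walking the directory recursively and in sorted order
set -e
CONF_DIR="$HOME/.ssh/config.d"
CONF_FILE="$HOME/.ssh/config"
: > "$CONF_FILE"
find "$CONF_DIR" -type f | sort | while read -r fragment; do
 cat "$fragment" >> "$CONF_FILE"
 echo "" >> "$CONF_FILE"
done
chmod 600 "$CONF_FILE"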

I hope this will be helpful for someone.

How social network algorithms destroy our perception of reality

Good evening, today I want to talk to you about something very important that perhaps not everyone is fully aware of.

A site like Facebook or Twitter collects gigabytes of information every second from an endless number of sources; thinking that everyone receives everything from their friends and from the pages they follow would be crazy.

To avoid this kind of “bombardment”, these sites implement a mechanism that “selects” only the posts “close” to the person.

This creates a friendly environment, a place where we like to be, and this is in line with the goals of a platform that makes its profit from advertising.

The problem arises when the environment is friendly to the point that, in practice, we only see what we are in perfect agreement with.

This filtered view of reality creates the illusion that everyone agrees with us, that what we think is the common opinion, and this can reinforce wrong beliefs or consolidate crazy ideas.

So the time has come to make the effort to look for what we don’t like, to build our own counter-argument, so as not to completely and inexorably lose touch with reality.

Have a good evening, see you next time

“Once you stop learning you start dying”

Hi!

The quote “once you stop learning you start dying” is from Albert Einstein and I’d love to explain why he was right.

I started my journey into the IT world when I was 10 years old, and as I learned new things I kept discovering new possibilities to keep learning. Today, ten years later, the situation has not changed in any way.

The big problem with continuing to learn is finding a mentor to help you with what you want to learn, or a reliable source of content.

Thanks to the distributed architecture of the network, it is not very hard to find good material, with some kind of peer review, for a lot of subjects.

However, there is a hidden truth about the net: somewhere in the world there has to be a server in a datacenter.

This is why at WikiToLearn we are trying to involve many people: students, teachers and researchers.

I believe this is the reason we can offer something useful: it is a merge of two worlds, and this can be extremely powerful for spreading knowledge in its highest forms.

I hope to keep learning forever, because I know that out there are things I cannot even imagine today and, sadly, maybe not even tomorrow.

Ansible automation tool

Hi!

These days I’m working on improving my skills in preparing, testing and deploying complex IT systems like mail servers or database clusters.

To accomplish this I started using Ansible to speed up the work.

With Ansible it is quite easy to set up a configuration template and the procedure to bring up a new service or reconfigure an existing one.

Unlike other automation tools like Puppet, it doesn’t require any kind of specialized server: it uses ssh to access all the servers, and this can also be a good solution to firewall/network ACL issues.
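As a minimal sketch of what that looks like (the inventory file and group name here are just placeholders), everything runs over plain ssh:

# check connectivity to every host in the inventory
ansible all -i inventory.ini -m ping
# install a package on one group of hosts, with privilege escalation
ansible mailservers -i inventory.ini -m apt -a "name=postfix state=present" --become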

I’m thinking about migrating all my shell scripts to an Ansible structure, but first I have to run some tests.

Bye!

I’m in the GARR Workshop 2016

Hi everyone! Today I’m at the GARR Workshop 2016, held at the CNR headquarters in Rome, and I just presented to the audience how we at WikiToLearn work on the technical side of our project.

GARR Workshop 2016

I was invited to deliver a talk about WikiToLearnHome, our DevOps infrastructure and automation system.
Tomorrow Riccardo will deliver a talk in the plenary track to introduce WikiToLearn to the 300+ university representatives who came to learn about innovation in digital education.

http://www.garr.tv/home/viewvideo/1012/gdl-cloud-a-storage-sviluppare-wikitolearn-dal-laptop-al-datacenter-ltoma-workshop-garr-2016-roma

WikiToLearn 0.7 released!

Hi! Yesterday was a good day, in some ways.

WikiToLearn 0.7 was released with the new WikiToLearnHome environment.

The process was not as easy as I hoped, but at the end of the day we got the new system up and running.

WikiToLearnHome was not as ready as I expected, but with some patches we finally got everything working; I think we had some issues due to the console locales.

Now we are online on the GARR network and this can help us keep the wiki at the highest level, for example by pushing backups over the internal backbone around the country, limited only by disk speed.

The funny part was the “apt-get” process: using the GARR mirror for Debian packages, the download felt like a LAN transfer thanks to the high-performance network mentioned before.

The new system is hosted in Bari, near the mirror (I think the actual machine hosting the mirror is the one right above ours), which means we can download pretty much every package for Ubuntu, Debian, CentOS, etc. without leaving the building.

Therefore, with these kinds of resources we can think about new ways to improve the user experience and the functionality we offer.

By the way, in this new release we also have some big news inside the code (this is where you can find all the details).

Today was spent trying to prevent a massive spam attack.

Now I have to go to sleep, tomorrow is another day.