SSH and complex configs

Hi!

Today I want to talk about the .ssh/config file. For those who don’t know about it, it is the configuration file SSH uses to customize connection options.
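For those who have never opened it, a minimal ~/.ssh/config entry looks something like this (the host name, user and port are made up for the example):

```
# With this entry, "ssh work" expands to the user, host and port below
Host work
    HostName server.example.org
    User luca
    Port 2222
```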

The issue with this file is that it doesn’t support any kind of “include” directive, which can be a problem if you have to write a long config file.

I wrote a small shell script to work around this (you can see the script here: https://quickgit.kde.org/?p=scratch%2Ftomaluca%2Fssh-build-config.git).

This script builds .ssh/config by reading slices of configuration from .ssh/config.d/, in order and recursively.
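The core idea can be sketched in a few lines of POSIX shell (a simplified re-implementation of the concept, not the actual script; paths are parameterized so it can run against any directory):

```shell
#!/bin/sh
# Rebuild an ssh config by concatenating fragments from a config.d
# directory, recursively and in sorted order.
build_ssh_config() {
    confdir="$1"    # e.g. "$HOME/.ssh/config.d"
    output="$2"     # e.g. "$HOME/.ssh/config"
    : > "$output"   # start from an empty file
    # find + sort gives a deterministic, recursive ordering of fragments
    find "$confdir" -type f | sort | while read -r fragment; do
        cat "$fragment" >> "$output"
    done
}
```

Fragments can then be given numeric prefixes (00-defaults, 10-work, …) to control the order in which they end up in the final file.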

I hope this is helpful for someone.

How social network algorithms destroy our perception of reality

Good evening, today I want to talk about something very important that, perhaps, not everyone is fully aware of.

Sites like Facebook or Twitter collect gigabytes of information every second from a huge number of sources; thinking that everyone receives all the information from their friends and the pages they follow would be crazy.

To avoid this kind of “bombardment”, these sites implement a mechanism that “selects” only the posts “close” to each person.

This creates a friendly environment, a place where we like to stay, and this matches the goals of a platform that profits from advertising.

The problem arises when the environment is so friendly that we essentially see only what we are in perfect agreement with.

This filtered view of reality creates the illusion that everyone agrees with us, that what we think is the common opinion; this can reinforce wrong beliefs or consolidate crazy ideas.

So it is time to make an effort to seek out what we don’t like, to build our own counter-argument, so as not to lose touch with reality completely and inexorably.

Have a good evening, see you next time

“Once you stop learning you start dying”

Hi!

The quote “once you stop learning you start dying” is from Albert Einstein and I’d love to explain why he was right.

In the first place, I started my journey into the IT world when I was 10 years old: as I learned new things, I discovered new possibilities to keep learning. Today, after 10 years, the situation has not changed in any way.

The big problem with continuing to learn is finding a mentor to help you with what you want to learn, or a reliable source of content.

Thanks to the distributed architecture of the network, it is not very hard to find good material with some kind of peer review for a lot of subjects.

However, there is a hidden truth about the net: somewhere in the world there has to be a server in a datacenter.

This is why in WikiToLearn we are trying to involve many people such as students, teachers and researchers.

I believe that this is the reason why we can offer something useful: it is a merger of two worlds, and this can be extremely powerful for spreading knowledge in its highest forms.

I hope to keep learning forever, because I know that out there, there are things that I cannot even imagine today and, sadly, maybe not tomorrow either.

Ansible automation tool

Hi!

These days I’m working to improve my skills at preparing, testing and deploying complex IT systems like mail servers or database clusters.

To accomplish this, I started using Ansible to speed up these operations.

With Ansible it is quite easy to set up a configuration template and the procedure to bring up a new service or re-configure an existing one.

Unlike other automation tools like Puppet, it doesn’t require any kind of specialized server: it uses SSH to access all the hosts, and this can also be a good solution for firewall/network ACL issues.
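As a rough illustration, a minimal playbook looks something like this (the host group, template and service names are invented for the example):

```shell
#!/bin/sh
# Write a hypothetical minimal playbook: render a config file from a
# template and restart the service only when the file changes.
cat > site.yml <<'EOF'
- hosts: mailservers
  become: true
  tasks:
    - name: deploy postfix configuration from a template
      template:
        src: main.cf.j2
        dest: /etc/postfix/main.cf
      notify: restart postfix
  handlers:
    - name: restart postfix
      service:
        name: postfix
        state: restarted
EOF
# Apply it over SSH (needs Ansible and an inventory file listing the hosts):
# ansible-playbook -i inventory site.yml
```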

I’m thinking about migrating all my shell scripts to an Ansible structure, but first I have to run some tests.

Bye!

I’m in the GARR Workshop 2016

Hi everyone! Today I’m at the GARR Workshop 2016, taking place at the CNR headquarters in Rome, and I just presented to the audience how we develop the technical side of the WikiToLearn project.

GARR Workshop 2016

I was invited to deliver a talk about WikiToLearnHome, our DevOps infrastructure and automation system.
Tomorrow Riccardo will deliver a talk within the plenary track to introduce WikiToLearn to the 300+ university representatives who came to learn about innovation in digital education.

WikiToLearn 0.7 released!

Hi! Yesterday was a good day, in some ways.

WikiToLearn 0.7 was released with the new WikiToLearnHome environment.

The process was not as easy as I hoped, but at the end of the day we got the new system up and running.

WikiToLearnHome was not as ready as I expected, but with some patches we finally got everything working; I think we had some issues due to the console locales.

Now we are online on the GARR network, and this helps us keep the wiki at the highest level: for example, we can push backups through the internal backbone around the country, limited only by disk speed.

A funny part was the “apt-get” process: using the GARR mirror for Debian packages, the downloads felt like a LAN transfer thanks to the high-performance network mentioned before.

The new system is hosted in Bari, near the mirror (I think the actual machine hosting the mirror is the one right above ours), which means we can download virtually any package for Ubuntu, Debian, CentOS, etc. without leaving the building.

Therefore, with these kinds of resources we can think about new ways to improve the user experience and the functions we offer.

By the way, this new release also brings some big news inside the code (this is where you can find out all about it).

Today was also a day spent trying to prevent a massive spam attack.

Now I have to go to sleep; tomorrow is another day.

“I would like to change the way”

Hi, today (2 days after the end of the Sprint) I’d like to talk about what happened last week at CERN.

The goal of the Sprint was Operation1000: this meant we had to rethink our jobs, the way we manage content, and the development process for the technical side.

Operation1000 is about creating an infrastructure (technological and cultural) that allows us to have 1000 people in the core of our project; I’m working on the technical side of the operation.

Seven days of work doesn’t seem like a huge amount of time, but it turned out to be enough to change things and to be the key for the future.

My job in the Sprint was to support the other developers. As a sysadmin, I had to work out how to get everything up and running for every developer in the easiest way.

Dario, Davide, Christian and I worked on the scripts that in the near future will manage all the WikiToLearn infrastructure, such as the rollout scheme, the development environment setup, the staging process, etc.

Because I was the only sysadmin in the project, I had to teach my apprentices about my job, since until now I had never written documentation about what I was doing.
Don’t worry, luckily this is now going to change.

I hope that all this will help WikiToLearn to be ready for Operation1000.

I’d like to thank KDE e.V. for making this Sprint possible.

See you soon!

Hug the LHC

Hi, today at CERN we went to CMS to understand how scientists found the Higgs boson and to see the great machines operating there.


We visited the CMS experiment and it was awesome!

We passed through retina-based authentication and elevators descending about 80 meters, and at the end of the cavern there it was: the gigantic CMS machine.

After the hardware underground, we saw the data center (the level-1 trigger) and the control room, where we found Plasma 4.2 running on the machines!

I think we can only hug the LHC and love the science behind it!

First day @CERN

Hi, today is my first day at CERN.

After six hours of travel we finally arrived at CERN.

The journey was crazy, crossing France and Switzerland, passing through the Mont Blanc tunnel and Geneva.

The CERN entrance is like a border crossing, and this makes sense because CERN is neutral territory.

By the way: the CERN facilities are awesome, for example idea² or the cafeteria.

This day was amazing and I think the next days will get even better.

Stay tuned!

The decalogue of the sysadmin

Hi everyone, today I want to talk about WikiToLearn’s release strategy.

The basic idea: data is the most important thing; after data comes data access (read and write).

One thing to know about the wiki’s structure: it is based on Docker containers (1 webserver, 1 database, 1 mathoid, 1 memcache, etc.).

All these services are exposed to the internet through a container running HAProxy, which also deals with the encryption.

Containers have a name; in the default setup the name starts with “w2l-dev”, which is the development environment.

The scripts allow this name to be overridden, so that multiple instances can coexist on the same host, up and running at the same time.

These things allow us to use this release script:

  1. put the site in read-only state
  2. create a backup
  3. create the new environment with the new version of the software
  4. restore the latest backup
  5. run the database updates
  6. bring up the new instance with an HAProxy restart

This can only work thanks to Docker, and only if the backups aren’t too big.
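The six steps above can be sketched in a handful of shell lines (the w2l-* commands are hypothetical placeholders for the real WikiToLearnHome scripts; DRY_RUN=1 just prints the sequence instead of executing it):

```shell
#!/bin/sh
# Dry-run sketch of the release sequence described above.
DRY_RUN=${DRY_RUN:-1}

run() {
    # Print the command in dry-run mode, execute it otherwise
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

release() {
    version="$1"
    run w2l-set-readonly on                # 1. put the site in read-only state
    run w2l-backup create                  # 2. create a backup
    run w2l-env create "w2l-$version"      # 3. create the new environment
    run w2l-backup restore "w2l-$version"  # 4. restore the latest backup
    run w2l-db-update "w2l-$version"       # 5. run the database updates
    run service haproxy restart            # 6. switch traffic to the new instance
}

release 0.8
```

Keeping the old containers untouched until the very last step is what makes the rollback trivial.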

What are the benefits of this approach?

The best thing about this script is the rollback: we don’t modify the old environment until the release has successfully completed.

Another thing is the site uptime: deploying this way, we can stay online during the entire process. Someone may say “but you are read-only!” and my reply is “yes, but not for a week, only for a few minutes”.

I think all the downsides are a reasonable price to meet the Decalogue of the sysadmin:

  1. Do backups
  2. Do backups
  3. Do backups
  4. Do backups
  5. Do backups
  6. Do backups
  7. Do backups
  8. Do backups
  9. Do backups
  10. Do backups

Not to mention the golden rule: “Make backups, stupid!”.

That’s all for today, see you next time.