WikiToLearn 0.7 released!

Hi! Yesterday was a good day, in some ways.

WikiToLearn 0.7 was released with the new WikiToLearnHome environment.

The process was not as easy as I had hoped, but at the end of the day we got the new system up and running.

WikiToLearnHome was not as ready as I expected, but with some patches we finally got everything working; I think some of the issues were due to the console locales.

Now we are online on the GARR network, which helps us keep the wiki at the highest level: for example, we can push backups across the internal backbone around the country, limited only by disk speed.

The funny part was the “apt-get” process: using the GARR mirror for Debian packages, downloads felt like LAN transfers thanks to the high-performance network mentioned above.

The new system is hosted in Bari, near the mirror (I think the machine hosting the mirror sits right above ours), which means we can download pretty much every package for Ubuntu, Debian, CentOS, etc. without leaving the building.

With these kinds of resources we can think about new ways to improve the user experience and the features we offer.

By the way, this new release also brings some big news inside the code (this is where you can find out all about it).

Today was spent trying to prevent a massive spam attack.

Now I have to go to sleep; tomorrow is another day.

“I would like to change the way”

Hi, today (two days after the end of the Sprint) I’d like to talk about what happened last week at CERN.

The goal of the Sprint was operation1000, which meant we had to rethink our work: the way we manage content and the development process for the technical side.

Operation1000 is about creating an infrastructure (technological and cultural) that allows us to have 1000 people at the core of our project; I’m working on the technical side of the operation.

Seven days of work don’t seem like a huge amount of time, but it turned out to be enough to change things and to be key for the future.

My job in the Sprint was trying to support other developers. As sysadmin I had to work out how to get everything up and running for every developer in the easiest way.

Dario, Davide, Christian and I worked on the scripts that in the near future will manage all the WikiToLearn infrastructure: the rollout scheme, development environment setup, the staging process, etc.

Because I was the only sysadmin in the project, I had to teach my apprentices about my job in person, since until now I had never written documentation about what I was doing.
Don’t worry, luckily this is now going to change.

I hope all of this will help WikiToLearn be ready for operation1000.

I’d like to thank KDE e.V. for making this Sprint possible.

See you soon!

Hug the LHC

Hi, today at CERN we went to CMS to understand how the scientists found the Higgs boson and to see the great machines operating there.

[photo: lucatoma-hug-lhc]

We visited the CMS experiment and it was awesome!

We passed through retina-based authentication and elevators descending some 80 meters, and at the end of the cavern there it was: the gigantic CMS machine.

After the hardware underground, we saw the data center (the level-1 trigger) and the control room, where we found Plasma 4.2 running on those machines!

I think we can only hug the LHC and love the science behind it!

First day @CERN

Hi, today is my first day at CERN.

After six hours of travel we finally arrived at CERN.

The journey was crazy: crossing France and Switzerland, passing through the Mont Blanc tunnel and Geneva.

The CERN entrance is like a border crossing, which makes sense because CERN is neutral territory.

By the way: the CERN facilities are awesome, for example the idea² or the cafeteria.

This day was amazing, and I think the next few days will be even better.

Stay tuned!

The decalogue of the sysadmin

Hi everyone, today I want to talk about WikiToLearn’s release strategy.

The basic idea: data is the most important thing; after data comes data access (read and write).

One thing to know about the wiki’s structure: it is based on Docker (one webserver, one database, one Mathoid, one Memcached container, etc.).

All these containers are exposed to the internet through a Docker container running HAProxy, which also handles the TLS encryption.
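As an illustration, an HAProxy setup along these lines might look like the minimal fragment below; the backend name, container hostname, and certificate path are assumptions for the example, not the real WikiToLearn configuration.

```
# hypothetical minimal haproxy.cfg fragment
frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/wikitolearn.pem
    default_backend w2l-web

backend w2l-web
    # forward decrypted traffic to the webserver container
    server web1 w2l-dev-httpd:80 check
```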

Each container has a name; in the default setup the names start with “w2l-dev”, the prefix for the development environment.

The scripts allow this prefix to be overridden, so that multiple instances can co-exist on the same host, up and running at the same time.
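A minimal sketch of this naming convention, assuming a `W2L_PREFIX` variable (the variable name is an illustration; the real scripts may differ):

```shell
# default prefix is the development environment; override it to run
# a second instance alongside (W2L_PREFIX is a hypothetical name)
W2L_PREFIX="${W2L_PREFIX:-w2l-dev}"

container_name() {
    echo "${W2L_PREFIX}-$1"
}

container_name httpd                            # -> w2l-dev-httpd
( W2L_PREFIX=w2l-prod; container_name httpd )   # -> w2l-prod-httpd
```

Because every container of an instance shares the prefix, two full stacks never collide on container names.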

These things allow us to use this release script:

  1. put the site in a read-only state
  2. create a backup
  3. create the new environment with the new version of the software
  4. restore the last backup
  5. run the database updates
  6. bring up the new instance with an HAProxy restart

This can only work thanks to Docker, and only if the backups aren’t too big.
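The six steps above can be sketched as a script like the following; every name here (the `w2l-dev-*`/`w2l-new-*` containers, image tag, flag file, and backup path) is an assumption for illustration, not the real WikiToLearn scripts.

```shell
#!/bin/sh
# Hypothetical sketch of the six-step release flow; container names,
# image tags, and paths are illustrative assumptions.
set -eu

DRY_RUN="${DRY_RUN:-1}"   # 1 = only print the commands, don't run them

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

release() {
    # 1. put the site in a read-only state
    run touch /srv/w2l/read-only.flag
    # 2. create a backup of the current database
    run docker exec w2l-dev-mysql sh -c "mysqldump --all-databases > /backup/dump.sql"
    # 3. create the new environment with the new software version
    run docker run -d --name w2l-new-web wikitolearn/web:latest
    # 4. restore the last backup into the new environment
    run docker exec w2l-new-mysql sh -c "mysql < /backup/dump.sql"
    # 5. run the database updates (MediaWiki schema migrations)
    run docker exec w2l-new-web php maintenance/update.php --quick
    # 6. bring up the new instance by restarting HAProxy
    run docker restart w2l-haproxy
}

release   # with DRY_RUN=1 this only previews the commands
```

The dry-run default makes it easy to review exactly what would happen before touching the live site.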

What are the benefits of this approach?

The best thing about this script is the rollback story: we don’t modify the existing environment until the release has completed successfully.

Another thing is the site uptime: deploying this way, we can stay online during the entire process. Someone might say “but you are read-only!”, and my reply is “yes, but not for a week, only for a few minutes”.

I think all the downsides are a reasonable price to pay to honor the decalogue of the sysadmin:

  1. Do backups
  2. Do backups
  3. Do backups
  4. Do backups
  5. Do backups
  6. Do backups
  7. Do backups
  8. Do backups
  9. Do backups
  10. Do backups

Not to mention the golden rule: “Make backups, stupid!”.

That’s all for today, see you next time.

Ciao, I’m a sysadmin

Hello everyone, I’m Luca Toma. I was a student at the ITIS E. Mattei in Sondrio, I’m now a physics student at the University of Milano-Bicocca, and today I want to talk about my contribution to WikiToLearn.

Let’s start with a bit of history: six months ago Riccardo (aka ruphy) told me about a project he was developing: WikiToLearn.

The first thing I wanted to do was build a reproducible development environment, so I started writing scripts that, given a clean system, could turn it into a system ready to run the wiki. Fortunately Rafael (another KDE developer) suggested I try a beautiful tool: Docker.

At this point my work changed: I switched from virtual machines and LXC containers to Docker (initially inside some virtual machines); after a few days of total despair I had something working, and after a few weeks I was able to run the whole WikiToLearn environment in containers.

Meanwhile, the team began to grow and, with a little hard work, everyone was able to run the environment on their own PC during the first sprint or in the following days.

My task in this project is to provide developers with the tools they need to write, test, and release their code calmly in a safe environment, preventing possible data loss.

WikiToLearn faces new challenges: figuring out how to run the site for a growing number of users, with an increasing number of features and developers.

These new challenges require new solutions, and I hope to find them and put them in place during the “KDE @ CERN” sprint, which will be held February 7 to 13.

These days I am studying tools like OpenStack, Puppet, Swarm, Kubernetes, etc., all to deal with these new challenges.

That’s all for now, until next time!

Hello World

Hi everyone, this is the first post on my blog.

I decided to start this blog to share my passion for computing with everyone, in particular for everything related to the world of servers and networks.

I am currently taking part in the WikiToLearn project (http://www.wikitolearn.org/) as a system administrator.

I hope to write something interesting; until next time!