
Use modern Fortran

1 min read

Here are some links about using modern Fortran:

  1. "Object Oriented Programming with Fortran 2003", a series of four very good tutorials offering a comprehensive dive into modern Fortran development, especially focused on object-oriented programming:
    1. Object Oriented Programming with Fortran 2003 Part 1: Code Reusability
    2. Object Oriented Programming with Fortran 2003 Part 2: Data Polymorphism
    3. Object-Oriented Programming in Fortran 2003 Part 3: Parameterized Derived Types
    4. Object-Oriented Programming in Fortran 2003 Part 4: User-Defined Derived Type Input/Output
    5. and also Fortran Array Attributes: Pointer and Allocatable, Contiguous and Target, which explains why you should use 'allocatable' instead of 'pointer' whenever possible.
  2. "Fortran Best Practices", a sort of cookbook helping you avoid bad coding practices in Fortran.



Some thoughts on interpreted vs. compiled languages

5 min read

In science we often have to process large datasets, so the performance of a language has always been the most important criterion. However, even though compiled languages like Fortran or C are the best choice performance-wise, there are a lot of good reasons to use an interpreted language like Python instead. I will give a general overview of why you could use an interpreted language instead of your favourite compiled language and why it's a good idea. I'll dive into more details in a second part, detailing my own setup using Python.

Interpreted languages, as opposed to compiled languages, do not need a compilation step to be run. When dealing with data, you frequently encounter heterogeneous data, or data from different sources that need similar yet different code.

Classical use case

Imagine you're working on the correlation between the weather and the amount of pollution in the air. You'd have a dataset giving you the weather (rainy, sunny, …) as a function of time, as well as another dataset giving you information about air pollution. Using a compiled language, you would probably:

  1. write a bunch of code that reads the weather dataset
  2. compile it
  3. run it, wait for the output, and go back to 1 to correct errors, …
  4. write a bunch of code that reads the pollution dataset
  5. compile it
  6. run it and wait for the output, raging because you are reloading and recomputing everything from the first dataset, and go back to 1 to correct errors, …
  7. cheer because you've loaded everything
  8. write a bunch of code that actually does something with your data
  9. compile it
  10. run it, and go back to 1, until you have your data loaded and analysed
  11. wonder how to plot your data

Compiled languages become worthwhile when the time required for steps 3, 6 and 10 is bigger than the time required for all the other steps.

If you're using an interpreted language though, you would do (or I think you should do!) something like:

  1. load your payload of external libraries
  2. write a bunch of code to read your weather dataset
  3. launch it and correct your error, going back to 2
  4. write a bunch of code to read your pollution dataset
  5. launch it and correct your errors, going back to 4
  6. write a bunch of code that actually does something with your data
  7. run it and correct your errors, going back to 6
  8. do your plots in the language you've been using so far

This works because once loaded, data never needs to be reloaded. For small datasets this is not of much help, but when loading starts taking seconds, it's a real pain to have to redo it each time. One final advantage is that you have plotting and high-level data analysis all in the same place.
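The interactive workflow above can be sketched with pandas. A minimal, self-contained example (the datasets and column names are made up for illustration):

```python
# Sketch of the interpreted workflow: each dataset is loaded once,
# then the analysis step can be re-run and fixed as often as needed.
# The data and column names below are invented for illustration.
import io
import pandas as pd

# Step 2: read the weather dataset (done once per session).
weather = pd.read_csv(
    io.StringIO("date,weather\n2015-01-01,rainy\n2015-01-02,sunny\n"),
    parse_dates=["date"],
)

# Step 4: read the pollution dataset (also done once).
pollution = pd.read_csv(
    io.StringIO("date,no2\n2015-01-01,42.0\n2015-01-02,17.5\n"),
    parse_dates=["date"],
)

# Step 6: the actual analysis -- re-running this does NOT reload anything.
merged = weather.merge(pollution, on="date")
rainy_mean = merged.loc[merged["weather"] == "rainy", "no2"].mean()
```

In a notebook, each of these steps would live in its own cell, so correcting the analysis never re-reads the files.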

Python rocks

Snake on a rock

My Python setup relies heavily on the Jupyter notebook, with a combination of pandas for data loading and analysis, numpy for mathematics and matplotlib for the plots. Here is an overview of the features of Jupyter I'm using. The Python kernel for Jupyter is called IPython.


Jupyter is a web application that helps you manage your project. In Jupyter's notebooks, you write your code in cells that you can then execute. There are a lot of reasons why I think Jupyter is amazing.

Jupyter magic

Jupyter is actually language-independent. I'm using it as a frontend for Python (using IPython), but it can embed a ridiculous number of different languages: C, Julia, R, OCaml, Java, C#, …

Since the notebook is only a webpage, anyone who has a web browser can see your notebook; it means sharing your results is as easy as one click. This is especially useful since you can do literate programming, detailing the steps of your analysis in the middle of your code.

The notebooks provided by Jupyter are based on a run-per-cell model. This means that you write code in a cell, then execute the cell, creating some output and eventually populating the global scope. But you can also insert "formatted text" cells, with embedded LaTeX formulae, images, movies, … whatever a web browser can handle!

Jupyter %magic

Each Jupyter cell is independent, and the way it is executed can be customized using 'magics'. This is extremely powerful. For example, I had to find the eigenvalues of 200,000 matrices. Using Jupyter, I only added '%%cython' at the beginning of my cell, turning it into Cython-compiled code! All the functions in the cell are then available in the global scope without needing to create another file, compile it, import it, …
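As a rough illustration of that kind of batch job, here is a plain-NumPy version (with a smaller, made-up batch; the %%cython magic is the alternative when you need hand-written loops):

```python
# Eigenvalues of many small matrices at once: numpy.linalg.eigvals
# broadcasts over leading dimensions, so no explicit Python loop is needed.
# The batch size and matrices here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
matrices = rng.standard_normal((1000, 3, 3))  # 1,000 matrices instead of 200,000
eigenvalues = np.linalg.eigvals(matrices)     # one array of shape (1000, 3)
```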

I also use the magics to run cells on different jupyter instances, to wrap the cell in a function (to prevent it from polluting the scope), to dump data, …

%matplotlib notebook

If you add this line in a cell and execute it, any figure generated by matplotlib will be drawn in the browser, with interactive controls (zoom, move, download plot).
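A typical cell following that magic could look like the sketch below. Outside a notebook there is no browser to draw into, so this version forces the non-interactive Agg backend; in Jupyter the magic selects the backend for you.

```python
# Build a figure exactly as you would in a notebook cell; with
# "%matplotlib notebook" active, it would render interactively in the browser.
import matplotlib
matplotlib.use("Agg")  # headless backend, only needed outside Jupyter
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 200)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x), label="sin(x)")
ax.set_xlabel("x")
ax.legend()
```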





Using Emacs for Web development

1 min read

Here is a list of extensions I'm using for web development:


Use your server to stream music

2 min read

My computer is an ultrabook shipped with a nice 120 GiB SSD. Unfortunately, that leaves no room for music. I still have a hard drive with dozens of GiB of nice music, but it's very inconvenient to carry it around (and it's not really useful to have an ultrabook if you have to carry what couldn't fit inside it).

The solution I found was to put all my music on a server (for example a Raspberry Pi) and stream it using MPD (to control the music and keep a database) together with Icecast (to do the streaming).

First install both of them: sudo aptitude install mpd icecast2. You then have to configure mpd by editing /etc/mpd.conf. In the following, replace YOURDOMAIN by the domain name of your server (or its IP address), and choose your own PASSWORD1 and PASSWORD2:

music_directory                 "/var/lib/mpd/music"
user                            "mpd"
bind_to_address                 ""

auto_update                     "yes"
follow_outside_symlinks         "yes"
follow_inside_symlinks          "yes"

password                        "PASSWORD1@read,add,control,admin"
…
audio_output {
    type        "shout"
    encoding    "ogg"
    name        "My Shout Stream"
    host        "localhost"
    port        "8000"
    mount       "/mpd.ogg"
    password    "PASSWORD2"
    quality     "7.0"
    format      "44100:16:1"
    protocol    "icecast2"
    user        "source"
    public      "yes"
    timeout     "2"
}
audio_output {
    type        "shout"
    encoding    "ogg"
    name        "My Shout Stream - HD"
    host        "localhost"
    port        "8000"
    mount       "/mpd_good.ogg"
    password    "PASSWORD2"
    quality     "10.0"
    format      "44100:16:1"
    protocol    "icecast2"
    user        "source"
    public      "yes"
    timeout     "2"
}

You then have to configure icecast to stream mpd to the internet. For this, edit /etc/icecast2/icecast.xml and modify the <authentication> section: the <source-password> there has to match the PASSWORD2 defined before.





Moreover, if you have a domain name, change the <hostname> line to match it.

Well, this is pretty much it: you will then have access to your stream through http://YOURDOMAIN:8000/mpd.ogg and http://YOURDOMAIN:8000/mpd_good.ogg. If you want to access your mpd remotely, install a client like sonata, and do not forget to specify the password, which is PASSWORD1.
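If you prefer scripting to a graphical client, MPD also speaks a simple line-based text protocol on port 6600. Here is a rough Python sketch; the helper functions are my own (not from any library), and the status call assumes an MPD server reachable with PASSWORD1:

```python
# Talk to MPD's text protocol directly: send a command, read lines
# until MPD replies "OK" (or "ACK" on error). parse_mpd_response and
# mpd_status are my own helpers, written for this post.
import socket

def parse_mpd_response(text):
    """Turn MPD's 'key: value' lines into a dict, stopping at OK/ACK."""
    info = {}
    for line in text.splitlines():
        if line == "OK" or line.startswith("ACK"):
            break
        key, _, value = line.partition(": ")
        info[key] = value
    return info

def _read_reply(f):
    """Read lines until MPD terminates the reply with OK or ACK."""
    lines = []
    while True:
        line = f.readline().rstrip("\n")
        lines.append(line)
        if line == "OK" or line.startswith("ACK"):
            return "\n".join(lines)

def mpd_status(host, password, port=6600):
    """Connect, authenticate, and return MPD's 'status' as a dict."""
    with socket.create_connection((host, port), timeout=5) as sock:
        f = sock.makefile("rw", newline="\n")
        f.readline()                    # greeting, e.g. "OK MPD 0.19.0"
        f.write(f'password "{password}"\n')
        f.flush()
        _read_reply(f)                  # "OK" if the password is accepted
        f.write("status\n")
        f.flush()
        return parse_mpd_response(_read_reply(f))
```

For example, mpd_status("YOURDOMAIN", "PASSWORD1") would return fields such as state and volume.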


SMS Synchronization

1 min read

I was recently wondering how I could sync my SMS between my computer, my server and multiple devices. I tried to find an open source app that would do the trick and, well, I found half of it!

SMSSync is an Android app that syncs your messages using simple HTTP requests, i.e. each time you receive a message, it contacts a server with the content of the message, the sender, … The only issue is that no server is provided. At the very end of the developer section, you can find the starting point of a server, which you still have to install and which is nowhere near a solution for ordinary users.

So I developed my own, using django as the framework. For now, it allows the SMSSync application to sync all SMS, stores them in an SQLite database and can manage multiple users/multiple phones.
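To give an idea of what such a server does, here is a minimal, dependency-free sketch in plain Python. The field names and the JSON reply are assumptions for illustration, not SMSSync's documented payload, and my actual server uses django rather than the standard library:

```python
# A toy sync endpoint: the phone POSTs form-encoded fields describing a
# message, the server stores them and acknowledges with a JSON reply.
# Field names ("from", "message") are assumed for illustration.
import json
import threading
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

MESSAGES = []  # stand-in for the SQLite database

class SyncHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        fields = urllib.parse.parse_qs(self.rfile.read(length).decode())
        MESSAGES.append({k: v[0] for k, v in fields.items()})
        body = json.dumps({"payload": {"success": True}}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Demo: run the server on a free port and post one message to it.
server = HTTPServer(("127.0.0.1", 0), SyncHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
data = urllib.parse.urlencode({"from": "+33600000000", "message": "hello"}).encode()
reply = urllib.request.urlopen(
    f"http://127.0.0.1:{server.server_port}/sync", data=data
).read()
server.shutdown()
server.server_close()
```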

I still need to change the administration (right now, everyone can modify everything, which is not very good …). Feel free to send me feedback, even though no one will ever read this.

You can find the sources on my git.


Remote viewing of session through VNC and reverse SSH

2 min read

Imagine you have a computer behind a router, called remote, and a computer with physical access, called local. What if you want to see remote's X session (an X session is a graphical session in Linux)?

One way to do it is to :

  1. initiate a "secure tunnel" from remote to local, using the "reverse" ability of SSH
  2. initiate a connection from local to remote using the tunnel we just opened
  3. tell VNC to use the last connection to get the remote X session

To prepare the ground, we first have to install a VNC server on the remote machine. I installed x11vnc. If you're running ArchLinux, you can find all the information on the ArchLinux wiki. To configure it:

    1. Install it through your package manager (for example # apt-get install x11vnc on Ubuntu/Debian)
    2. Generate a password
      $ mkdir ~/.x11vnc
      $ x11vnc -storepasswd password ~/.x11vnc/passwd
    3. In a running X session, run the VNC server: $ x11vnc -display :0 -rfbauth ~/.x11vnc/passwd
    4. Finally, initiate a reverse SSH tunnel to the local machine: $ ssh -R 12345:localhost:22 local_user@local_address

On the local side, do as follows.

  1. Install a viewer on the local side (for example # apt-get install vncviewer on Ubuntu/Debian)
  2. Initiate a connection to remote through the tunnel: $ ssh remote_user@localhost -p 12345 -L 5901:localhost:5900
  3. Finally, launch the viewer: $ vncviewer localhost:1. You will be asked for the password you entered in step 2 of the server configuration.

You should now have the remote session opened in a window on the local machine!