Configuring Linux 4.x or higher – HiFiBerry https:/
Using MySQL with GitLab CE Omnibus package https:/
Also http:/
Automatically open remote files in local emacs – Andy Skelton on WordPress https:/
Nemo is a nice file manager forked from GNOME Files, making it easier to use and somewhat prettier.
The linked page has a one-liner to prevent Nemo from drawing a desktop, which is useless and annoying for i3 users like me (see the sketch below).
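Something like the following gsettings call; the exact key is my assumption, and it has moved around between Nemo versions:

    # Hypothetical reconstruction: tell Nemo not to manage the desktop.
    gsettings set org.nemo.desktop show-desktop-icons false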
Here are some links about using modern Fortran:
In science we often have to process large datasets. For that reason, the performance of a language has always been the most important criterion. However, even though using compiled languages like Fortran or C is the best choice performance-wise, there are a lot of good reasons to use an interpreted language like Python instead. I will give a general overview of why you could use an interpreted language instead of your favourite compiled language and why it's a good idea. I'll dive into more details in a second part, detailing my own setup using Python.
Interpreted languages, as opposed to compiled ones, do not need a compilation step to be run. When dealing with data, you frequently encounter heterogeneous data, or data from different sources that need similar yet different code.
Imagine you're working on the correlation between the weather and the amount of pollution in the air. You'd have a dataset giving you the weather (rainy, sunny, …) as a function of time, as well as another dataset giving you information about air pollution. Using a compiled language, you would probably:
Compiled languages become very good when the time required for steps 3, 7 and 12 is larger than the time required for all the other steps.
If you're using an interpreted language though, you would do (or I think you should do!) something like:
This is possible because once the data are loaded, you don't need to reload them. For small datasets this doesn't help much, but when loading starts taking seconds, it's a real pain to have to reload everything each time. One final advantage is that you get plotting and high-level data analysis all in the same place.
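As a minimal sketch of that load-once workflow in Python (the file and column names are made up for illustration):

    import pandas as pd

    # Load once per session; this is the slow part you never want to repeat.
    weather = pd.read_csv("weather.csv", parse_dates=["time"])      # hypothetical file
    pollution = pd.read_csv("pollution.csv", parse_dates=["time"])  # hypothetical file

    # From here on, re-run and tweak the analysis endlessly without
    # paying the loading cost again.
    merged = pd.merge(weather, pollution, on="time")
    print(merged.groupby("weather")["pm25"].mean())  # hypothetical column names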
My Python setup relies heavily on the Jupyter notebook http:/
Jupyter is a web application that helps you manage your project. In Jupyter's notebooks, you write your code in cells that you can then execute. There are a lot of reasons why I think Jupyter is amazing.
Jupyter is actually language independent. I'm using it as a frontend for Python (using IPython), but it can embed a ridiculous number of different languages: C, Julia, R, OCaml, Java, C#, …
Since the notebook is only a webpage, anyone who has a web browser can see your notebook; sharing your results is as easy as one click. This is especially useful since you can do literate programming, detailing the steps of your analysis in the middle of your code.
The notebooks provided by Jupyter are based on a run-per-cell model. You write code in a cell, then execute the cell, creating some output and eventually populating the global scope. But you can also insert "formatted text" cells, with embedded LaTeX formulae, images, movies, … whatever a web browser can handle!
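For instance (a toy example, each snippet being its own cell):

    # Cell 1: running this populates the global scope with `squares`.
    squares = [x ** 2 for x in range(10)]

    # Cell 2: run later, and as often as you like, it sees `squares`
    # from the global scope without recomputing it.
    print(sum(squares))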
Each Jupyter cell is independent, and the way it is executed can be customized using 'magics'. This is extremely powerful. For example, I had to write code to find the eigenvalues of 200,000 matrices. Using Jupyter, I only added '%%cython' at the beginning of my cell, turning it into Cython-compiled code! All the functions in the cell are then available in the global scope without needing to create another file, compile it, import it, …
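A minimal sketch of what that looks like, assuming the cython package is installed (the function here is a toy stand-in, not the eigenvalue code):

    # In its own cell, once per session:
    %load_ext Cython

    # In a separate cell:
    %%cython
    # Everything in this cell is compiled by Cython; afterwards `fib` is a
    # compiled function callable from any other cell, with no file to create,
    # compile or import.
    def fib(int n):
        cdef int i, a = 0, b = 1
        for i in range(n):
            a, b = b, a + b
        return a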
I also use magics to run cells on different Jupyter instances, to wrap a cell in a function (preventing it from polluting the global scope, sketched below), to dump data, …
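The 'wrap the cell in a function' magic can be sketched in a few lines with IPython's public magic API; the name 'scoped' is my own invention:

    from IPython import get_ipython
    from IPython.core.magic import register_cell_magic

    @register_cell_magic
    def scoped(line, cell):
        """Run the cell body inside a throwaway function so its local
        variables never leak into the notebook's global scope."""
        ns = get_ipython().user_ns
        body = "\n".join("    " + l for l in cell.splitlines())
        exec("def _scoped_cell():\n" + body + "\n_scoped_cell()", ns)
        ns.pop("_scoped_cell", None)  # remove the helper itself

After executing that once, starting a cell with %%scoped runs it in its own local namespace.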
If you add the line below in a cell and execute it, any figure generated by matplotlib will be drawn in the browser, with interactive controls (zoom, move, download plot).
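Given the controls described, the line is presumably the notebook backend magic:

    # Switch matplotlib to the classic notebook's interactive in-browser
    # backend: figures get zoom/pan/save controls.
    %matplotlib notebook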
Enjoy!