Installation

Hardware Requirements

The pipeline is implemented in the genericpipeline framework. It is designed to run non-interactively on a cluster via submission to a job queue. The pipeline has been tested on the following type of computing node:

  • 2 sockets x 16 cores (32 threads) at 2.10 GHz
  • 192 GB RAM
  • FDR Infiniband
  • 100 TB disk space

For basic pipeline profiling, please see Appendix A in Morabito et al. (2022). While the configuration can be adapted to your particular cluster specifications, we recommend at least 32 cores and 192 GB of RAM. A larger number of cores will reduce the pipeline's runtime.

The total data volume will reach about 2.5 times the size of the raw dataset downloaded from the LTA. If the data is dysco compressed, the raw dataset will be between 4 and 6 TB (depending on the number of international stations participating), so you will need 10 - 15 TB of disk space available. An uncompressed (pre-dysco) dataset will be around 20 TB, requiring about 50 TB of available disk space.
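As a rough sizing check, the sketch below (plain Python, no extra dependencies; the dataset path is hypothetical) measures the on-disk size of a downloaded dataset and applies the 2.5x rule described above:

    import os

    def dataset_size_bytes(path):
        # A MeasurementSet is a directory; sum the size of every file under it.
        total = 0
        for root, _dirs, files in os.walk(path):
            for name in files:
                total += os.path.getsize(os.path.join(root, name))
        return total

    raw = dataset_size_bytes("/data/my_observation")  # hypothetical download location
    print(f"raw: {raw / 1e12:.2f} TB; "
          f"recommended free space: {2.5 * raw / 1e12:.2f} TB")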

Note

Do not forget to check whether your data is dysco compressed! When you stage your data at the LTA, you will get a summary of its size; you will need 2.5 times this size in free disk space.
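One way to check for dysco compression, assuming python-casacore is installed, is to inspect the storage managers of the MeasurementSet: dysco-compressed columns are handled by DyscoStMan. A minimal sketch (the MS name is a placeholder):

    from casacore.tables import table

    t = table("L123456_SB001.MS", readonly=True, ack=False)  # placeholder MS name
    dminfo = t.getdminfo()
    t.close()

    # Each entry describes one storage manager and the columns it handles.
    dysco_columns = [
        col
        for dm in dminfo.values()
        if dm.get("TYPE") == "DyscoStMan"
        for col in dm.get("COLUMNS", [])
    ]
    if dysco_columns:
        print("dysco-compressed columns:", ", ".join(dysco_columns))
    else:
        print("no dysco compression detected")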

Software Requirements – with Singularity

This is the recommended method for running the pipeline. You will need the following (a quick environment check is sketched after this list):

  • An appropriate Singularity image. You may use a different one, but be aware that there may be software compatibility issues. We recommend:
  • The lofar-vlbi GitHub repository (master branch):
  • The prefactor GitHub repository (see the note about aoflagger):
  • The facet self-cal GitHub repository:
  • The lofar_helpers GitHub repository:
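Once the image and repositories are in place, a quick way to confirm that the container provides the Python environment the pipeline expects is to run a small import check inside it, e.g. with singularity exec <image> python3 check_env.py. The module list below is an assumption; adjust it to whatever the repositories you installed actually import:

    import importlib.util

    # Assumed module names; edit to match your installation.
    REQUIRED = ["casacore.tables", "losoto", "numpy", "astropy"]

    def available(name):
        try:
            return importlib.util.find_spec(name) is not None
        except ModuleNotFoundError:
            # Raised when a dotted name's parent package is absent.
            return False

    missing = [m for m in REQUIRED if not available(m)]
    if missing:
        raise SystemExit("missing from this environment: " + ", ".join(missing))
    print("all required modules found")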

Software Requirements – without Singularity

If for some reason you are not able to use Singularity, please contact us for instructions.