Dimitris and Kristina are an incredible duo on a mission to tease out critical information from enormous datasets and deliver actionable insights to UK rare disease patients.
This blog post is essentially a list of things I wish I had known before I started building an nf-core pipeline.
Much of the work in genomics over the last two decades has been focused on the research environment, whether it be sequencing genomes of different species, or identifying changes in the sequence, structure or expression of genomes.
Dr. Martin Steinegge has always been obsessed with one question: how can we get insights from analysing massive amounts of metagenomics data in minutes, rather than the days and weeks the process currently takes?
The main goal of nf-core is to establish best practices for pipelines built with Nextflow.
On paper, cloud computing is a relatively straightforward model that provides access to compute and storage resources. In reality, cloud pricing can be quite complex.
Although the issues associated with sequencing have mostly been resolved, the raw data generated from the sequencing process is useless unless you can give it meaning. That’s where the unsung superheroes come in: bioinformaticians.
Over the last few years, open source has increasingly become the norm, even in bioinformatics. The number of high-quality applications freely available on GitHub and other Git providers keeps growing, including the pipelines that the Broad Institute uses in production.