The great genome sequencing rush
Data is the new gold and, with the cost of sequencing steadily dropping, scientists are mining the genomic goldmine with relentless enthusiasm. Automation of the workflow, from DNA purification to data generation and analysis, has made routine high-throughput processing possible, unleashing possibilities far beyond what anyone could have expected just a few years ago. As a result, the field of genomics has expanded exponentially, touching virtually every aspect of the biosciences.
Which kind of scientist are you?
Have you set up a reliable, optimized system for high-throughput nucleic acid purification, only to have the company supplying the instruments and reagents announce that it is discontinuing the products and support? There may be perfectly valid reasons behind that decision, but the net result threatens your operations, and you now face some critical choices. What are you going to do?
Batch effects and the reproducibility of genomic studies
Data reproducibility has recently become one of the most debated topics in the scientific community. The lack of reproducibility has been attributed to a combination of factors, ranging from poor data collection practices to flawed or misleading analysis methodologies. Because of the practical challenges of handling massive numbers of samples, high-throughput genomics is particularly vulnerable to reproducibility problems. One significant contributor is the so-called batch effect: systematic technical variation introduced when samples are processed in separate groups, for example on different days, on different instruments, or with different reagent lots.
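A toy simulation can make the idea concrete. The sketch below uses entirely hypothetical numbers (no real study data): one gene with no true biological difference between two sample sets is measured in two processing batches, each adding its own offset. A naive comparison mistakes the technical shift for a signal, while a simple per-batch mean-centering (a deliberately minimal stand-in for real batch-correction methods) removes it.

```python
# Illustrative sketch with simulated (hypothetical) data: a batch effect
# masquerading as a biological signal, removed by per-batch mean-centering.
import random
import statistics

random.seed(0)

def simulate(batch_offset, n=50):
    # One gene's measurements: true mean 10.0, plus a technical batch offset.
    return [random.gauss(10.0 + batch_offset, 1.0) for _ in range(n)]

batch_a = simulate(batch_offset=0.0)  # e.g., samples processed in week 1
batch_b = simulate(batch_offset=2.0)  # e.g., samples processed in week 2

# Naive comparison: the two batches appear "biologically" different...
raw_gap = statistics.mean(batch_b) - statistics.mean(batch_a)

def center(values):
    # Subtract each batch's own mean, removing the shared technical shift.
    m = statistics.mean(values)
    return [v - m for v in values]

# ...but after per-batch centering the apparent difference vanishes.
corrected_gap = statistics.mean(center(batch_b)) - statistics.mean(center(batch_a))

print(f"apparent difference before correction: {raw_gap:.2f}")
print(f"after per-batch centering: {corrected_gap:.2f}")
```

In real studies the correction is far more delicate, because batch and biology are often partially confounded; when an entire condition is run in one batch, no statistical method can fully disentangle the two, which is why balanced batch design matters as much as downstream correction.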