Research IT Club Returns

Our Research IT Club returns next month! Come along to see our services in action through researcher presentations and to hear the latest Research IT news. In October, Richard Unwin from the School of Medical Sciences describes how we worked with him to build a searchable database of protein expression in Alzheimer’s Disease, which has now been released online.


The abstracts for the event, which takes place at 2pm on 9 October in Room 4.206, University Place, are below. We ask all attendees to register for the event.

Updates from the Research IT teams

  • Research Lifecycle Project (RLP)
  • Research Infrastructure
  • Research Software Engineering

Producing a database of protein expression changes in human Alzheimer’s Disease

Richard Unwin, Division of Cancer Sciences, School of Medical Sciences

Alzheimer's disease (AD) is a progressive neurodegenerative disorder that affects around 36 million people worldwide and for which no effective treatment is available. AD develops in a distinctive spatial pattern across the brain, so it is vital to study the disease in multiple brain regions rather than at a single site. We used mass spectrometry to measure the relative expression of ~5,000 proteins across six brain regions, comparing AD brains with healthy, age-matched controls. These data provide critical insights into the progression of the disease, including novel AD-related pathways, and communicating such complex data effectively to the broader field is key. We therefore collaborated with Research IT to design and build a searchable database containing both numerical and graphical data for every protein in every region studied, giving researchers a rapid and accessible way to interact with this complex dataset.

Condor Cloud: Accelerating materials discovery

Daniel Reta, Department of Chemistry, School of Natural Sciences

When it comes to the next generation of ultra-high-capacity data storage, single magnetic molecules, capable of retaining information at the molecular level, are ideal contenders. Unfortunately, even the best examples can only do so below liquid-nitrogen temperatures. We are therefore developing computational methods to understand what leads to information loss, with the aim of improving their performance. For each molecule under study, this requires on the order of 30,000 independent calculations (each taking ~4 hours on a single core of the CSF), making it an embarrassingly parallel problem. This type of high-throughput computing is ideal for the UoM Condor service, but the scale of our problem was too large for on-site resources. Fortunately, the recently developed Condor cloud bursting service proved ideal for the task: by extending the Condor pool into AWS we were able to run ~3,000 cores simultaneously, successfully completing this otherwise intractable computation.
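For readers unfamiliar with high-throughput computing, an embarrassingly parallel workload like the one described above is typically expressed in HTCondor as a single submit file that queues many independent jobs, each distinguished only by its job index. The sketch below is illustrative, not the group's actual configuration: the executable name `run_calc.sh` and the file names are hypothetical placeholders.

```
# Minimal HTCondor submit description for an embarrassingly parallel run.
# Each job runs the same single-core executable with a different index.
universe     = vanilla
executable   = run_calc.sh
arguments    = $(Process)          # job index 0 .. 29999 selects the calculation
request_cpus = 1

output = calc.$(Process).out
error  = calc.$(Process).err
log    = calc.log

# Queue 30,000 independent single-core jobs
queue 30000
```

Because each job is independent, Condor can schedule them on whatever cores are available, whether on-site or, via cloud bursting, in a pool extended into AWS.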