In this week’s post the School of Applied Computing’s Senior Research Associate, part-time Lecturer and all-round Great Guy Tim Bashford tells us about his research activities:
I enrolled on a PhD having graduated from the University’s Software Engineering BSc Degree with first class honours. My PhD thesis topic is rooted heavily in computational physics, developing parallelism frameworks for numerical modelling of radiative, thermal and wave propagation. My areas of academic interest include software engineering, high performance computing, computational physics and robotics. I was born and raised in South Wales, and still live on the beautiful Gower peninsula. Outside of work, I am a keen photographer, and enjoy computer gaming, martial arts and swimming.
My research falls into the area of simulation: how light, specifically laser light, interacts with human tissue. There is a practical, clinical agenda for this research; lasers are now used throughout medical and cosmetic therapies, for applications ranging from laser hair removal to the treatment of tumours. My research focuses especially on cancer treatment, but has the potential for application to any of these areas.
The Monte Carlo Method
The Monte Carlo method was developed in the late 1940s as a technique of repeated random sampling to obtain numerical results. In particle physics, it was used to bridge two distinct approaches to simulating physical phenomena: low-particle-count classical mechanics problems solved through differential equations, and high-particle-count problems approached through statistical mechanics. By taking a probabilistic approach, the Monte Carlo method simulates many instances of the same measurable unit and aggregates their outcomes, obtaining a numerical result without more abstract analytical treatment. A particle is first profiled mathematically as a set of probabilities for each known event that can occur; a computational model then simulates that particle, selecting among the possible events using a pseudo-random number generator. By simulating a sufficiently large number of such particles, each following its own sequence of random events, the probabilities inherent in the method produce a theoretically accurate overall result.
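The idea above can be sketched in miniature. The following is an illustrative toy model, not the model described in this post: it traces photons through a single homogeneous slab of tissue, sampling free path lengths from an exponential distribution and choosing between absorption and (isotropic) scattering at each interaction site. The coefficient values are placeholders chosen for demonstration.

```python
import math
import random

def simulate_photon(mu_a, mu_s, depth, rng):
    """Trace one photon through a homogeneous slab of the given depth (cm).

    mu_a, mu_s: absorption and scattering coefficients (1/cm), illustrative values.
    Returns 'absorbed', 'transmitted', or 'reflected'.
    """
    mu_t = mu_a + mu_s           # total interaction coefficient
    z, cos_theta = 0.0, 1.0      # start at the surface, travelling inward
    while True:
        # Sample a free path length from the exponential distribution
        step = -math.log(rng.random()) / mu_t
        z += step * cos_theta
        if z < 0.0:
            return "reflected"
        if z > depth:
            return "transmitted"
        # At the interaction site, decide between absorption and scattering
        if rng.random() < mu_a / mu_t:
            return "absorbed"
        # Isotropic scattering: pick a new direction cosine uniformly
        cos_theta = rng.uniform(-1.0, 1.0)

def run(n_photons, mu_a=0.1, mu_s=10.0, depth=1.0, seed=42):
    """Tally the fates of n_photons independent photons."""
    rng = random.Random(seed)
    counts = {"absorbed": 0, "transmitted": 0, "reflected": 0}
    for _ in range(n_photons):
        counts[simulate_photon(mu_a, mu_s, depth, rng)] += 1
    return counts
```

Each photon is independent, so the tallies converge toward the true absorption, transmission and reflection fractions as the photon count grows.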
This process for simulating the photon, the elementary particle of light, is already well established, but it is based on Cartesian geometry. The result is a simulation over cuboid geometry which, while an acceptable overview, reduces accuracy to the point that using the generated data could introduce unnecessary risk in a clinical setting. I have therefore developed a model in which accurate CT or MRI data may be simulated and the damage to the cells within it calculated. In theory, this will permit a medical doctor to test the outcome of a given laser treatment without risking the patient's safety.
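The post does not detail how scan data enters the model, but one common approach is to work on a voxel grid: each voxel carries the optical properties of the tissue type identified there by segmenting the CT or MRI volume. The sketch below assumes a hypothetical label-to-properties table with placeholder values; it is not clinical data and not the author's actual mapping.

```python
# Hypothetical tissue optical properties (mu_a, mu_s in 1/cm).
# The labels and values are illustrative placeholders only.
TISSUE_PROPERTIES = {
    0: (0.0, 0.0),    # air / background
    1: (0.2, 15.0),   # skin
    2: (0.1, 10.0),   # fat
    3: (0.3, 20.0),   # tumour
}

def build_voxel_grid(labels):
    """Map a 3-D nested list of segmentation labels (e.g. derived from a
    CT or MRI volume) to per-voxel optical properties for photon transport."""
    return [[[TISSUE_PROPERTIES[v] for v in row] for row in plane]
            for plane in labels]
```

A photon stepping through such a grid looks up the properties of whichever voxel it currently occupies, which is how patient-specific anatomy replaces the simple cuboid geometry.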
Due to the probabilistic nature of the Monte Carlo model, there is an inevitable relationship between the simulated particle count, the accuracy of the result and the time taken. How strongly this holds depends on the degree to which random number generation features in the model: a simulation only nominally affected by random elements requires a smaller particle count, while a simulation in which random numbers play a significant role requires a much larger one.
It is therefore highly desirable to simulate as many photons as possible. Doing so, however, creates a linear relationship between photon count and time taken: if 10,000 photons take 1 minute, then 100,000 photons will take 10 minutes. This is inherent to the Monte Carlo model, so the focus instead falls on maximising performance from the hardware.
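The accuracy/count/time trade-off is easy to see on a standard textbook example (unrelated to the tissue model itself): estimating π by random sampling. The statistical error of a Monte Carlo estimate shrinks in proportion to 1/√N, so halving the error requires four times as many samples, while the runtime grows linearly with N.

```python
import math
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by sampling points in the unit square and counting
    the fraction that land inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                 for _ in range(n_samples))
    # Area of quarter circle / area of square = pi/4
    return 4.0 * inside / n_samples
```

Running this with increasing `n_samples` shows the estimate tightening around π roughly as 1/√N, which is exactly why photon counts (and hence run times) must grow so quickly to buy additional accuracy.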
The University has its own 33 enclosure Transtec Windows High Performance Computing cluster, and an on-site 13 enclosure Fujitsu Windows/Redhat cluster provisioned by HPC Wales and it is through these supercomputers that simulations may be completed more quickly. I have therefore ported the algorithm and associated structures to utilise the Message Passing Interface (MPI) on high performance supercomputers, resulting in order of magnitude time improvements, thus improving the clinical accuracy of the model without impacting on time taken. The next step of this process is to port the model to the Compute Unified Device Architecture general purpose graphics processing framework, where, through use of an Nvidia Tesla module, further order of magnitude speed increases are predicted.
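The MPI port itself is not shown in the post, but the parallelisation pattern it relies on is simple because photons are independent: scatter the photon budget across workers, let each run its own batch, then reduce the partial tallies into one result. As a stand-in for MPI's scatter/reduce collectives, the sketch below uses Python's `multiprocessing` with a deliberately simplified one-interaction photon model; it illustrates the decomposition, not the author's actual code.

```python
import random
from multiprocessing import Pool

def absorbed_count(args):
    """Worker: simulate a batch of photons and return how many are absorbed.
    Simplified single-interaction model, for illustration only."""
    n_photons, mu_a, mu_t, seed = args
    rng = random.Random(seed)   # distinct seed per worker
    return sum(rng.random() < mu_a / mu_t for _ in range(n_photons))

def parallel_absorption(total_photons, n_workers=4, mu_a=0.1, mu_t=10.1):
    """Split the photon budget across workers (the MPI 'scatter'),
    then sum the partial tallies (the MPI 'reduce')."""
    batch = total_photons // n_workers
    jobs = [(batch, mu_a, mu_t, seed) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        partials = pool.map(absorbed_count, jobs)
    return sum(partials) / (batch * n_workers)

if __name__ == "__main__":
    print(parallel_absorption(100_000))
```

Because the photons never communicate, this decomposition scales almost linearly with worker count, which is what makes order-of-magnitude speed-ups achievable on a cluster or, in the predicted next step, on GPU hardware.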