Method Special: Assay automation

Essential Assistant or Jobkiller?
by Steven Buckingham, Labtimes 02/2017



It may be a well-worn question, but "Will robots take over our jobs?" remains a popular topic of conversation. A common reply from the pro-tech crowd is that robots will take on the boring bits of our jobs, leaving us with the interesting bits they can't do, such as being creative.


If you want a good illustration of that optimistic prophecy, take a look at the world of assays. After all, assays are all about grunt work. Pipette a few microlitres of each sample into each well of a 1,536-well plate, add reagent B, incubate for four hours and transfer to another plate. No wonder assays lean so heavily towards automation – this is precisely where automation is most needed, and also where it is easiest to implement: running stereotyped protocols in a predictable environment.
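To see why such protocols are so amenable to scripting, here is a minimal sketch of the workflow above expressed as code. The MockRobot class and its methods are hypothetical stand-ins invented for illustration – every real liquid handler has its own programming interface – but the shape of an automated protocol looks much like this:

class MockRobot:
    """Hypothetical stand-in for a liquid-handling robot driver."""
    def transfer(self, volume_ul, source, dest):
        print(f"transfer {volume_ul} uL: {source} -> {dest}")
    def dispense(self, reagent, volume_ul, dest):
        print(f"dispense {volume_ul} uL {reagent} into {dest}")
    def incubate(self, plate, hours):
        print(f"incubate {plate} for {hours} h")

robot = MockRobot()
# A 96-well layout keeps the printed output short; a 1,536-well run
# would differ only in the well list.
wells = [f"{row}{col}" for row in "ABCDEFGH" for col in range(1, 13)]

for well in wells:
    robot.transfer(5, f"sample_plate/{well}", f"assay_plate/{well}")
    robot.dispense("reagent_B", 5, f"assay_plate/{well}")
robot.incubate("assay_plate", hours=4)
for well in wells:
    robot.transfer(5, f"assay_plate/{well}", f"read_plate/{well}")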

If you want to automate an assay, you have to solve two different but related problems. The first hurdle is manipulation – moving materials or objects from one place to another quickly and reliably. Machines that move materials from one assay plate to another, rather pretentiously called "robots", have been a familiar sight for many years, while other, more mundane examples of automation take place below the radar of our attention every day. Think of the quiet, unassuming workhorse that is the plate reader, or the way that old spectrophotometer at the corner of the bench selects the appropriate wavelengths according to your programming.

Assay automation has a surprisingly long history – as far back as the 1950s, routine clinical screens were being performed by a machine called the AutoAnalyzer, superseded at the end of the decade by the "Robot Chemist". This first generation of assay automation basically did the same thing over and over, applying the same analysis to many samples in parallel or rapidly performing analyses on many samples in series.

Programmable robotics

The last two decades, however, have seen a remarkable increase in the flexibility of automation, thanks to two key developments. The first is the rise of programmable robotics. For some time now, plate handlers and dispensers have been able to follow an arbitrary sequence of commands. But programmable robots have since gone a step further. Engineers are making rapid advances in robotics and artificial intelligence, and lab automation devices now on the market use vision to direct their movements, learn from the user, recover intelligently from errors and even advise the user on the design of the protocol – see the Spinnaker Microplate Robot from ThermoFisher, for example. Other features are fast becoming standard, such as dynamic scheduling: because many protocols involve waiting time, robots can make scheduling decisions that maximise the efficiency of runs.
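As a toy illustration of what dynamic scheduling buys you – the timings are assumed and this is no vendor's actual algorithm – consider four plates, each needing a hands-on step, an unattended incubation and a read-out. A greedy scheduler keeps the single robot arm busy while the other plates incubate:

import heapq

HANDLING, INCUBATION = 5, 60     # minutes; assumed figures
N_PLATES = 4

# Events are (time_ready, plate, step); the arm greedily serves the
# plate that has been waiting longest. Incubations run unattended, so
# the arm is free to set up other plates in the meantime.
events = [(0, plate, "setup") for plate in range(N_PLATES)]
heapq.heapify(events)
arm_free = 0
while events:
    ready, plate, step = heapq.heappop(events)
    start = max(ready, arm_free)          # wait for the arm if it is busy
    arm_free = start + HANDLING
    if step == "setup":
        heapq.heappush(events, (arm_free + INCUBATION, plate, "readout"))
        print(f"t={start:3d}  plate {plate}: set up, incubating")
    else:
        print(f"t={start:3d}  plate {plate}: read out, done at t={arm_free}")

Run serially, the four plates would take 280 minutes (4 × 70); interleaved, the sketch finishes at t = 85.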

But it is not just pipetting and dispensing that are becoming automated these days. Companies like Hudson Robotics, for instance, now offer automated colony pickers that pick out cell colonies according to the user's specifications, capturing an image of each colony as a record.

The second key development in assay automation has been the rise of microfluidics and, more generally, widespread improvements in the precise handling of very small fluid volumes. Microfluidic chips have been made out of glass, thermoplastics and other polymers; combined with advanced flow-control technology, they allow incredibly rapid screening or, indeed, almost any other small-scale fluidic reaction.

As a result, there has been rapid growth in the number of companies offering bespoke microfluidic chips along with all the associated accessories (pumps, switches and software). Assays can be run in tiny droplets, kept separate by being suspended in oil and pushed through a labyrinth of tubes and chambers. Careful construction of the chip allows droplets to be split, merged or sorted, so that thousands of reactions, of different kinds, can be processed in parallel. This technology has coupled very well with PCR to give rise to digital PCR, in which thousands of parallel amplifications, each seeded by at most a single DNA molecule, bring unparalleled quantitative accuracy and sensitivity to the table.
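That quantitative accuracy comes from simple Poisson statistics: count the fraction of droplets that light up and correct for droplets that happened to receive more than one template molecule. A minimal sketch (the droplet volume is an assumed figure, typical of commercial droplet generators):

import math

def dpcr_copies_per_ul(n_positive, n_total, droplet_nl=0.85):
    """Template concentration from a digital PCR run.

    Molecules distribute over droplets following Poisson statistics,
    so the mean occupancy is lam = -ln(1 - p), where p is the fraction
    of positive droplets; this corrects for droplets that received
    two or more molecules.
    """
    p = n_positive / n_total
    lam = -math.log(1.0 - p)                 # mean copies per droplet
    return lam / droplet_nl * 1000.0         # copies per microlitre

# e.g. 4,500 positive droplets out of 20,000:
print(f"{dpcr_copies_per_ul(4500, 20000):.0f} copies/uL")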

Microfluidics lends itself to automation and brings the added advantage of a very small size. This small footprint (and similarly small cost) makes microfluidics attractive for point-of-care clinical assays. Such microfluidic assays could even displace gold-standard assays such as ELISA. In 2014, Ronald Davis and Mehdi Javanmard of Stanford University described a microfluidic device that could detect a human protein biomarker (the cytokine interleukin 6) with the same sensitivity as ELISA but without its bulk.

More realistic models

The second challenge for assay automation is the read-out. That almost invariably means coupling what you want to measure to a signal that can be read by a machine. Even early generations of automated assays used antibody- or oligonucleotide-based immobilisation and, slightly later, optical read-outs from fluorescent probes. This is all fine for clinical assays but in the field of drug discovery, attention is turning elsewhere. Pitifully few molecules discovered in this kind of assay end up as successful pharmaceuticals, leading many authors to argue for the merits of whole-animal phenotypic screens. Model genetic organisms, so the argument goes, are a much more realistic model of human disease than dishes of cells.

Deploying whole animals – even worms and fruit flies – in large-scale automatic screens, however, is not as easy as manipulating cells. And where the read-out is a behavioural change, as it would be if you were screening for drugs to treat anxiety, for instance, you have an even bigger technical challenge on your hands. But that has not deterred some experimenters. In 2013, Holly Richendrfer and Robbert Créton at Brown University described a simple protocol for screening zebrafish larvae (J Vis Exp. 4: e50622), while Ron Yu's team at the University of Kansas Medical Centre described a method for automating the analysis of rodent olfactory behaviours (PLoS ONE 9(4): e93468). Indeed, such is the interest in behavioural phenotyping that several companies offer complete, integrated behavioural analysis systems, such as Noldus' "EthoVision". Frequent claims that these systems are high-throughput are, however, unconvincing, given the scale of most modern industrial-scale automated assays. All the same, computer vision-based assays of behaviour have grown fast enough to earn their own term: "ethomics".

It is not only clinical laboratories that are catching on to assay automation. Advances in omic-scale techniques have shifted academic interest away from single-protein or even single-pathway hypothesis-making to a more holistic way of tackling biological questions. Genome- or proteome-scale probing demands similarly large-scale assaying in the search for, say, interacting proteins. But all too often the speed of these techniques is not matched by the speed of the assays, creating a major bottleneck in discovery. Experimental designs allow the rapid, systematic perturbation of an entire transcriptome, for example, but measuring the effects can take weeks or months.

Automation is the obvious answer but don't forget that it also takes time to develop an automated assay. Once built, it can be re-used, but that only pays off if the assay you have automated is likely to be used repeatedly. Take the genetic control of microbial gene expression, for instance. To dissect the many regulatory control elements, it would be helpful to measure gene expression at the genome-wide scale at many different time points. To do this, Alistair Elfick of the University of Edinburgh developed an automated system that extracts RNA and analyses it in a high-throughput pipeline (SLAS Technol. 22(1): 50-62). Edwin, as it is called, integrates a set of devices that take care of plate inoculation and bacterial growth, extract the RNA, check its quality and analyse it. It is controlled by the enviably-named software suite, 'OverLord'.

New bottlenecks

Shifts in the way we are doing science are creating pressures for new fields of automation. In cell culture, for example, the past few years have seen a clear shift away from 2D culture to 3D culture, from cells to organs. Peter Zehetmayer, Director of Sales and Marketing Life Sciences at DITABIS AG, thinks this is opening up a whole new need for automation. "I see a shift from two-dimensional cell cultures to spheroids and even whole animals, and this is creating new bottlenecks. It takes incredible patience to dispense thousands of zebrafish all in the right orientation in 96-well plates!"

Urban Liebel, the founder of ACQUIFER (a division of DITABIS AG), started the company because he saw this was the way cell culture was going. "The whole approach to automating this kind of imaging read-out is quite different," argues Zehetmayer. "In 2D assays, all you have to do is segment a few cells from the background and average the signal. With zebrafish, you have only one fish per well and you need to do things like tell whether the liver is enlarged or the gut is developing normally. There is good 3D software out there but it needs to be adapted to high throughput."
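The 2D case Zehetmayer describes really is that routine. Here is a minimal sketch using scikit-image, with a synthetic image standing in for a real fluorescence micrograph:

import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

# Synthetic image: two bright "cells" on a noisy background.
rng = np.random.default_rng(0)
img = rng.normal(100, 5, (128, 128))
img[20:40, 20:40] += 80
img[70:100, 60:90] += 120

# Segment the cells from the background and average the signal per cell.
mask = img > threshold_otsu(img)
for cell in regionprops(label(mask), intensity_image=img):
    print(f"area {cell.area:4d} px, mean signal {cell.mean_intensity:.1f}")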


Urban Liebel, then at the KIT in Karlsruhe, and his colleagues developed an automated, high-content zebrafish screening platform. Photo: KIT

But Zehetmayer also points out an unexpected problem that automation has, unfortunately, brought along with it: data storage. "When you start analysing high-resolution 3D cell cultures, you quickly generate terabytes of data a day. And remember, you have to store that data for ten years. One of my customers offered to copy a file onto my computer and it took two hours! Networks are simply not capable of transferring that quantity of data."

Lab robotics is cropping up in all sorts of places. It is even making an impact on neuroscience, traditionally regarded as one of the most 'hands-on' biological disciplines. The ground-breaking Allen Brain Atlas was made possible by automated slicing, imaging and image analysis. Robotic devices are being used to position electrodes, and even the green-fingered, artisanal activity of patch clamping has been automated. Molecular Devices sells the IonFlux automated patch-clamping machine, saving tedious hours of trying to form gigaseals. For several years, Axon Instruments has been marketing OpusXpress, a system that injects Xenopus oocytes and then, a few days later, impales them with recording electrodes and performs a gene expression experiment.

In all these examples, the robot recapitulates what the researcher does at the bench, and that rarely takes us beyond mere liquid handling. But lab automation is breaking away from the hands-on drudgery of assays and screens. Robotic automation is beginning to make its impact across the laboratory, not just at the bench, and it won't be long before you start seeing that impact even in the academic laboratory.

Self-writing lab book

Take lab books, for instance. Isn't it a bit of a pain having to pull off those gloves to write the last step into a paper book? Don't get me wrong, I am not about to wax lyrical about the virtues of e-lab books. That's yesterday's news.

No – today is the day of the self-writing lab book. Matthias Wille of the Federal Institute for Occupational Safety and Health in Dortmund, Germany, together with Philipp Scholl and Kristof Van Laerhoven, then at the Albert-Ludwigs University of Freiburg and now at the University of Siegen, Germany, have shown that a combination of Google Glass and a wrist-worn accelerometer can completely replace the lab book (DOI: 10.1145/2750858.2807547).

Don't bother printing out the experimental protocol – it is simply projected onto the glass, leaving the experimenter free to do the work undistracted. The experimenter can advance (or even modify) the protocol by speech alone. The wrist accelerometer records the movements of the experimenter's hand; the system takes the trace it produces and uses a Hidden Markov Model to infer, with astonishing accuracy, whether the experimenter was pouring, shaking, cutting, inverting and so on.
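To make the idea concrete, here is a generic Viterbi decoder over coarsely binned hand-motion readings. It is illustrative only – the activities, bins and probabilities below are invented, not taken from the published system:

import numpy as np

STATES = ["pour", "shake", "cut", "invert"]
OBS = {"low": 0, "medium": 1, "high": 2}   # binned hand-motion energy

log_start = np.log(np.full(4, 0.25))       # any activity equally likely
log_trans = np.log(np.array([              # activities tend to persist
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.7, 0.1, 0.1],
    [0.1, 0.1, 0.7, 0.1],
    [0.1, 0.1, 0.1, 0.7],
]))
log_emit = np.log(np.array([               # rows: states; cols: low/med/high
    [0.2, 0.6, 0.2],                       # pouring: mostly medium motion
    [0.1, 0.2, 0.7],                       # shaking: mostly high motion
    [0.2, 0.5, 0.3],
    [0.5, 0.3, 0.2],
]))

def viterbi(readings):
    """Most likely activity sequence for a series of binned readings."""
    obs = [OBS[r] for r in readings]
    score = log_start + log_emit[:, obs[0]]
    back = []
    for o in obs[1:]:
        cand = score[:, None] + log_trans  # cand[i, j]: state i -> state j
        back.append(cand.argmax(axis=0))
        score = cand.max(axis=0) + log_emit[:, o]
    path = [int(score.argmax())]
    for ptr in reversed(back):             # follow back-pointers home
        path.append(int(ptr[path[-1]]))
    return [STATES[s] for s in reversed(path)]

print(viterbi(["high", "high", "medium", "medium", "low", "low"]))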

For Ross King at the University of Manchester, however, even this is just not automated enough. He argues that in some disciplines, at least, the entire process of hypothesis generation, experiment design, execution and analysis can, and indeed should, be automated. King's contention is that we don't really understand something until we can build a machine that mimics it.

And King is not just a sci-fi dreamer; he has published several projects that practise just what he preaches. In them, computers represent background knowledge using logic programming, use abductive reasoning (guessing at a theory to explain an observation – thanks, Wikipedia) to generate hypotheses, then apply active learning and cost analysis to decide what the most informative experiment would be, before going on to actually perform the experiment and interpret the results. He used this approach to make major inroads into understanding yeast metabolism.
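A toy version of that closed loop might look like the sketch below. It is emphatically not King's system, which reasons over genome-scale models with logic programming; here, each hypothesis is a candidate set of essential genes, each experiment is a gene knockout, and the next experiment is chosen to split the surviving hypotheses as evenly as possible:

from itertools import combinations

GENES = ["g1", "g2", "g3", "g4"]
TRUTH = {"g1", "g3"}                  # hidden ground truth (the "cell")

# Hypothesis space: any pair of genes might be the essential set.
hypotheses = [set(pair) for pair in combinations(GENES, 2)]

def predict(hypothesis, knockout):
    """A hypothesis predicts growth iff the knocked-out gene is not essential."""
    return knockout not in hypothesis

while len(hypotheses) > 1:
    # Active learning: pick the knockout that splits the surviving
    # hypotheses most evenly, i.e. the most informative experiment.
    knockout = min(GENES, key=lambda g: abs(
        sum(predict(h, g) for h in hypotheses) - len(hypotheses) / 2))
    grows = predict(TRUTH, knockout)  # "run" the experiment
    hypotheses = [h for h in hypotheses if predict(h, knockout) == grows]
    print(f"knock out {knockout}: grows={grows}, "
          f"{len(hypotheses)} hypotheses left")

print("conclusion: essential genes are", sorted(hypotheses[0]))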

Different angle

But not everyone thinks the right job for automation is to replace you and me. Lab Times spoke to Charles Fracchia, CEO and founder of Biobright, a company that takes a different approach to lab automation. "There has been a lot of movement in the field of automation but we come from a different angle," says Fracchia. "We take the view that automation should be there to augment the human element, not replace it. Most of our staff come from a biological background and want to develop a much more intuitive and human-centred approach to automation."


Charles Fracchia's company Biobright is working on voice-based assistant technologies to “augment” life scientists. Photo: MIT

Augmenting human intuition

Fracchia argues that an unexplored role for automation is augmenting human intuition. "Full automation will never happen in the biology laboratory because biological systems are inherently multidimensional. Automated assays, however efficient, can only provide linear improvements."

Human intuition, however, is at home roving around the complexities of biological systems. "I remember a conversation I had with the CEO of a pipeline software company. They had struggled for months with a discovery problem for a large pharmaceutical company. An intern, or possibly an undergraduate, piped up, saying they had a hunch they should try an idea. When asked why, they couldn't point to anything specific; it was just an intuition. Within a few months, the problem was solved."

Fracchia's company has been working on a voice-based assistant to help in the lab. It allows the scientist to take notes without taking their gloves off or even interacting with a terminal. "We have been working on software that takes data from all the computers in your lab and pulls together all the salient features," Fracchia tells Lab Times. But what would it be for? "Say you are doing a protein purification. Maybe you have run an ÄKTA to get a response profile; before that you used a FACS, before that a water bath, before that a qPCR. All that data would get centralised automatically. You can then ask questions using normal language: 'Computer, what has my yield been this week?'"
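As a hypothetical sketch of that centralisation idea – this is not Biobright's actual software, and the record schema is invented – the query above reduces to a filter and an aggregate over a shared store of instrument records:

from datetime import date, timedelta

# Invented example records: each instrument pushes its runs into one store.
runs = [
    ("qPCR", date(2017, 3, 6), {"ct": 21.4}),
    ("FACS", date(2017, 3, 7), {"events": 120000}),
    ("AKTA", date(2017, 3, 8), {"yield_mg": 4.2}),
    ("AKTA", date(2017, 3, 9), {"yield_mg": 5.1}),
]

def yield_this_week(records, today):
    """Sum protein yields recorded since Monday of the current week."""
    monday = today - timedelta(days=today.weekday())
    return sum(metrics.get("yield_mg", 0.0)
               for _, day, metrics in records
               if monday <= day <= today)

print(f"Yield this week: {yield_this_week(runs, date(2017, 3, 10)):.1f} mg")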

The rise of self-running labs, if it ever happens, will be like the arrival of self-driving cars. While we marvel at driverless cars, they remain, for the time being, a rarity. At the same time, automation of car-driving is creeping in by stealth. Automatic lane changing. Self-parking. It is the same with research, only here it is automatic plate readers and self-writing lab books.

Perhaps that is as far as automation will ever get in the lab.




