How to get the most out of automating your ELISA-like assays: dos and don’ts
Job Title: Senior Researcher for the High Throughput Antibody Discovery team
Philip is the senior researcher for the High Throughput Antibody Discovery team at Boehringer Ingelheim Pharmaceuticals Inc. He obtained his BS in Physiology and Neurobiology from the University of Connecticut and an MS degree in Biology from Southern Connecticut State University. He has held many roles within the Pharmaceutical and Biotechnology industries with focus on application development and support for research in genetics, immunotoxicology and immunology. Within the last 17 years, Philip has gained knowledge and experience in assay development, automation and automation integration. Since 2011, he has been expanding the use of automation for antibody discovery within BI, focusing on ELISA, AlphaLISA and MSD assay formats.
Bringing automation on any scale into your lab can be an intimidating prospect, especially for new users. There can be significant benefits to automating a method or process, but there are also many factors to consider and even more potential pitfalls. It is a common misconception that automation or integrated platforms have to be big, expensive and complex. There is also a common misconception that less complex assay formats don’t benefit from automation. Implementing automation can be as complex as a large, walk-away high-throughput screening platform or as simple as a plate reader and stacker, or any combination in between. Thoughtful planning and careful implementation are the two factors that determine the overall success of the process. Significant care should be taken in properly defining the scope and scale of any automation project. What do you hope to accomplish? Increased throughput? More consistency across assay runs? Freeing scientists to focus on non-routine tasks? Automation can help provide all of these. Automating relatively simple assays that need to be done on a regular basis is a great way to free up time for scientists to focus on non-routine work while maintaining consistency across assays.
ELISA can be one of the easiest assay formats to develop and run. There are several general formats that can be suitable for a wide variety of analytical needs. The ubiquity of the method has given rise to additional assay methods that, while based on ELISA, provide significant benefits over standard ELISA. Technologies such as MSD, Simoa, Gyros and ELISpot have their developmental roots in the ELISA assay, but have leveraged leading-edge technology. As a result, these methods allow for lower sample volumes, significantly increased sensitivity, multiplexing, larger dynamic range and reduced background, to name a few. Automation of these assays can provide all of the benefits previously discussed.
What are the differences between ELISA and “ELISA-like” assays?
The acronym ELISA stands for Enzyme-Linked Immunosorbent Assay. Two descriptive facets in that name are Enzyme-Linked and Immunosorbent. In a standard ELISA a protein, either antibody or antigen, is coated, or adsorbed, onto specially treated plates. The last step in the assay is detection. This is done by an enzyme conjugated to the detection antibody converting a substrate, such as ABTS, into a colored product (chromogenic detection). The enzyme most are familiar with is horseradish peroxidase, or HRP. The HRP enzyme converts the substrate into a colored product whose absorbance is then measured.
ELISA-like assay formats, as I refer to them, have their developmental and procedural roots in the standard ELISA, but are not ELISAs by the strictest definitions. Even a fluorescent ELISA is not technically an ELISA, since the detection signal is fluorescent and not the result of an enzyme converting a substrate. When I say procedural roots of an ELISA, I refer to the steps of an assay. MSD assays use chemiluminescent detection, but the assay steps are very similar to an ELISA. It is the serial addition of reagents/samples separated by wash steps that I refer to when I describe an assay format as ELISA-like.
How long does it take to get an automated system up and running? Months, weeks?
Well, the answer to that question depends greatly on the size and complexity of the automated system. If you are referring to something simple like a plate stacker attached to a plate reader, I would say hours. As you get into larger and more complex systems, the time needed to get them running and validated increases. There are several considerations to be mindful of. For example, if you are integrating a piece of equipment that has never been integrated with the controller software that you are using, you might discover problems that extend the installation and validation timelines. It has been my experience with the small-to-medium-sized, fully automated platforms that I have built that it can easily take a couple of weeks to get everything running smoothly. Larger, more complex systems can take a few months to set up and validate.
How much of the assay process can be automated and what steps remain manual even with automation?
As much or as little of the assay process as you desire can be automated. Automated systems designed specifically for ELISA are quite common. The only thing the user would need to do is add all of the labware needed to the starting positions, fill any reagent reservoirs or carboys, start the protocol and analyze the data when it’s finished. How much of a process end users choose to automate is generally defined by need, expertise and available funding. My lab takes a hybrid approach. We don’t run ELISAs enough to justify a fully automated platform, but we do run them in 384-well format. We have two 384-channel liquid handlers for sample and reagent transfers, as well as a plate washer with an attached plate stacker. We also have a plate stacker attached to the plate reader. Some manual intervention steps are required, but it is not all manual. We do run a lot of MSD assays and have a fully automated platform for that method.
What did you find was the biggest benefit in automating your assay process? What was the biggest challenge?
In my opinion, there are two big benefits: consistency in your assay process and better management of scientists’ time. Running assays on a properly designed and validated system provides a level of consistency that is very difficult to duplicate with manual pipetting. In an automated environment you know that every plate was handled exactly the same way as every other plate, so the data, even over long periods of time, are directly comparable. Automating assay work also allows scientists to focus their attention on non-routine experiments or tasks while an assay is running in the background, effectively acting as a ‘force multiplier’.
The biggest challenge that I have experienced, especially when working with novice users, is trying to change the way some scientists think about assay development and validation in the context of automated systems. If you work with automation regularly, you have to be able to think about resource allocation within an automated system as it pertains to your assay workflow. One small change in process in an automated workflow can significantly affect the time it takes to complete. Changing reagent addition steps, or which component performs which step, requires a type of fluid thinking that can be difficult for some scientists to grasp.
Have you seen differences in the data you generate using automation vs. manually? Can you explain?
I don’t generally see differences in assay data in a manual vs. automated comparison. What shows up positive in an automated assay should show up positive in a manual assay, so to speak. The differences that we see are usually in the quality of the data. Replicates on a plate, as well as replicates across plates, show lower %CV in an automated workflow than in a manual one. This assumes that the liquid handling methods have been properly optimized. A poorly optimized liquid handling protocol can, and usually will, produce very poor quality data.
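As a rough illustration, the replicate quality described here can be quantified with the coefficient of variation. The OD readings below are hypothetical placeholders, not measured data from either workflow:

```python
import statistics

def percent_cv(values):
    """%CV = 100 * sample standard deviation / mean."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical OD readings for the same sample replicated on one plate
manual_replicates    = [1.02, 0.91, 1.10, 0.95]
automated_replicates = [0.99, 1.01, 0.98, 1.02]

print(round(percent_cv(manual_replicates), 1))     # wider spread, higher %CV
print(round(percent_cv(automated_replicates), 1))  # tighter spread, lower %CV
```

The same function applied to the mean values of each plate gives the across-plate %CV.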
What optimization steps did you conduct when starting your automated processing?
When optimizing an automated method, there are three questions you should be asking, in my opinion:
- Are all of the components functioning properly? – This is usually reserved for the liquid handling part of a system. Accurate QC and preventative maintenance are key. You should run liquid handling QC on your instrument at least twice a year. This can be something as simple as dispensing a fluorophore from a reservoir into an assay plate and reading it to ensure equal signal across the whole plate. Additional steps can be taken to assess accuracy: if I program a 5 µL dispense, am I actually getting 5 µL? Any preventative maintenance or calibration that can be done on components should be done, and done regularly.
- Are the steps being performed correctly, as programmed? – Check and double check your method steps. Ensure that the right samples are going into the right wells; the right source is going to the right destination; the right plates are being used at the right time. I usually accomplish this by setting up several ‘wet runs’ with several different colored water batches. I will run a single plate to ensure that the steps are being done correctly and in the expected order. I’ll then increase the number of plates being run to see the effect on the overall time-to-completion.
- Is this the most efficient resource allocation? – Resource allocation is the overall driver of method duration in an automated environment. If you write your method in such a way as to have one piece of equipment doing the majority of the tasks, then your method is going to take a long time. Be open to changing the order of steps. A simple change like this may greatly affect the allocation time of a particular resource, and by extension, the overall length of your method run. Also, be open to bracketing time points. If you define a 1 hour incubation +/- 5 minutes and end up with an unacceptable protocol length, try varying the +/- time value. If you set limits in a method step, the software will obey those limits. If it can’t fit a step into that limit, it will wait until it can, extending the overall duration of your protocol. Increasing that value to +/- 7 minutes may be all the adjustment needed.
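The effect of widening an incubation tolerance can be sketched with a toy scheduler. Everything here is hypothetical (one shared washer, staggered plate starts, all timing values); it is not a model of any particular scheduling software:

```python
def run_length(n_plates, stagger, incubate, tol, wash):
    """Toy greedy schedule: plates start `stagger` min apart and share one
    washer; each plate must be washed within [incubate - tol, incubate + tol]
    minutes of its start. If the washer can't fit that window, the scheduler
    must delay the plate, extending the run. Returns the time the last wash
    finishes and the total delay incurred."""
    washer_free = 0.0
    total_delay = 0.0
    for i in range(n_plates):
        start = i * stagger
        earliest = start + incubate - tol
        latest = start + incubate + tol
        wash_start = max(washer_free, earliest)
        if wash_start > latest:                 # window missed
            total_delay += wash_start - latest  # plate must be pushed back
        washer_free = wash_start + wash
    return washer_free, total_delay

# 8 plates started 2 min apart, 60 min incubation, 4 min wash each:
# a +/- 5 min window forces delays, while +/- 7 min fits every plate.
tight_end, tight_delay = run_length(8, 2, 60, tol=5, wash=4)
loose_end, loose_delay = run_length(8, 2, 60, tol=7, wash=4)
```

In this toy example the wider tolerance eliminates all waiting, which is exactly the kind of small bracketing change described above.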
I am wondering how the assay transfer works. How difficult is it to transfer and how do you validate after automation?
The term “assay transfer”, in this context, simply means moving an assay from the bench (manually completed) to being run on some form of automation. This assumes that the assay has already been properly developed and validated, if appropriate. Ideally, when performing an assay transfer to an automated platform you would compare the values of your control samples, be it EC/IC50, signal-to-noise ratio or some other quantitative measure, run both in a manual assay and in an automated assay. If everything has been done correctly, those values should be equivalent, within a reasonable %CV.
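One simple acceptance check along these lines: pool the control values from the manual and automated runs and confirm that the combined spread stays within a chosen %CV limit. The EC50 values and the 15% threshold below are hypothetical placeholders; real acceptance criteria are lab- and assay-specific:

```python
import statistics

def percent_cv(values):
    """%CV = 100 * sample standard deviation / mean."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical control EC50s (ng/mL) from three manual and three automated runs
manual_ec50    = [12.1, 11.8, 12.5]
automated_ec50 = [12.0, 12.3, 11.9]

combined_cv = percent_cv(manual_ec50 + automated_ec50)
transfer_ok = combined_cv <= 15  # acceptance limit chosen for illustration
```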
Have you run different sets of ELISAs? How do you run different configurations so you can do multiple different assays?
Yes, I have run several different “ELISA-like” assay formats on our current system. Making sure that you can run different formats on an automated system is part of the “scale and scope” portion of the platform design phase. If you are designing a system to handle ELISA work, it would be prudent to equip that platform for the most complex assay configuration that you think you will be handling: the most reagent additions, the greatest number of plates, etc. If you plan your configuration around the most complex and resource-intensive assay you are going to be doing, then less complicated assay formats are easily handled by the same platform. This will ensure that you have the correct components and enough capacity for other configurations. Keep in mind, changing the configuration of an existing system or adding components is not a trivial exercise, and it can become costly.
No one in our lab has automation experience; how much of a problem would this be for automating our ELISAs? What is the hardest part of training?
I have encountered situations just like this quite a few times over the years. It can be intimidating, but it is very much an achievable goal. The first thing you need to consider is the scale and scope of what you want to automate. How many plates? How often? What format of ELISA? How many different formats? Are you working in 96-well or 384-well, or both? Do you have a preference for any of the other components you may need, such as a specific plate reader, plate washer or dispenser? Define as many of the parameters as you can, and then you can reach out to vendors. If you decide that all you need is a couple of plate stackers to attach to a plate washer and plate reader, that is easily accomplished. If you decide that you need a higher degree of automation, there are several vendors that you should consider contacting. The five best-known vendors for integrated systems are Thermo, Beckman Coulter, High Res, Hamilton and Tecan. They all provide liquid handlers and/or integration controller software, referred to as “scheduling software”. Contact several vendors to get quotes and designs, and then you can decide which fits your lab’s needs and budget. Do not commit to anything unless you are sure it meets both.
Almost every instrument vendor includes training with purchase. This is also true for integrators. Take advantage of all of the training offered. Be up-front with your Sales Rep about the lack of automation experience within your lab. Most companies can also offer more extensive training for an additional fee. Bring the vendor’s software specialist into your lab and learn as much as you can from them. Get as much time in front of the system as you can while the vendor specialists are available to answer any questions. If there are other individuals at your site, perhaps in another lab or department, that have automation experience, seek them out. I am approached quite often by scientists from different labs and departments to help with automation issues and I am always more than happy to share what I have learned over the years.
We are seeing poor dynamic range for our standard curve, could automation improve this?
As a general rule, automation will not improve your assay in that fashion. It can reduce variability and tighten %CV, but it is not going to make a mediocre reference molecule better. I would focus my efforts on assay design/development in this situation. Sensitivity, dynamic range, LOD and LLOQ are functions of the assay design, not the automation. Perhaps there is an alternative standard that can be used, or even a surrogate. Perhaps altering some assay conditions can help extend your dynamic range. Don’t be afraid to try some things that you wouldn’t normally do.
Is imaging part of your automated system?
No, we do not have any imaging instruments as part of our automated platforms. We had one a few years ago, but have since discontinued its use due to changes in our overall process. It is completely possible to integrate imagers into an automation platform as long as the instrument that you are using is “automation friendly”. This usually involves an access point to the instrument that is unobstructed, a plate nest that slides out away from the body of the instrument, for example. Your plate transport component would have to be able to access that input/output point without obstruction. If you are considering an automation platform that integrates a high content imager, or any other imager, the best place to start is your sales rep for the imager and the vendor that you have selected as your integrator. The integrator should be able to tell you immediately if they have previously integrated your imaging instrument in another system. If they have, that means that it is automation compatible, it is compatible with their controller software and the required APIs have already been written.
Have you noticed a variability decrease with automation?
Generally, yes. If protocols are correctly written and validated, you should see a decrease in variation within replicates of an assay. If a liquid handling protocol is poorly written or not properly validated, the opposite can just as easily be true. Nothing can generate poor data as reliably as a poorly written liquid handling protocol.
If you automate incubation how do you account for well temperature variation with stacked plates?
In my experience, any kind of temperature-dependent incubation (incubate for X minutes at Y degrees) done on an automated platform is done in racks. There are a wide variety of incubators/refrigerators/freezers designed specifically to be used with automation platforms. They have an internal plate mover that retrieves a plate and presents it to a transfer station on the outside of the incubator, and that also retrieves a plate from that transfer station and stores it within the incubator. Instead of plates being stacked on top of one another, each plate has a slot in a rack that can be accessed by the plate transport component. This allows air flow between the plates to reduce temperature variation across any one particular plate. It also allows the system to retrieve any plate at any time without having to move other plates out of the way. This is referred to as “Random Access Storage”. You can order racks for a variety of plate heights to maximize storage capacity, so whether you are using standard microplates or deep-well plates, all you have to do is order the correct rack. This is also a feature within the instrument configuration that is usually pretty easy to change, allowing for additional flexibility.