Economic Design of a Limiting Dilution Assay

Ian C. McKay, 23 February 1998


**Summary**


A limiting dilution assay is one in which serial dilutions of the substance to be assayed are each inoculated into several wells (or tubes or animals), and the number of wells (or tubes or animals) that show a particular effect (e.g. death or infection) is counted. This paper discusses how to achieve an optimal balance between the number of different dilutions and the number of wells (or tubes or animals) inoculated per dilution. The aim is to achieve the smallest possible standard errors while keeping the total number of inocula within strict limits.

The main conclusion is that the best design of an assay is one in which the number of replicate wells per dilution is about equal to the number of different dilutions. It is also shown that in order to cut the standard errors by half it would be necessary to increase the total number of culture wells almost four-fold.

**The problem**


Let us assume that you know from experience what range of dilution factors needs to be covered in order to obtain all infection rates from zero to 100 per cent. Let us also assume that economic constraints prevent you from using more than *N* culture wells in total for each assay. Then there will still be room for doubt about whether it is best to use closely spaced dilutions (e.g. 3-fold serial dilutions) with relatively few replicates per dilution, or more widely spaced dilutions (e.g. 10-fold serial dilutions) with a larger number of replicates per dilution.

**The chosen criterion**


The rather simplified criterion that I will use here is to aim for the smallest possible estimated standard error, as calculated by the formula

$$SE = d\,\sqrt{\frac{n r - \sum_i r_i^2}{n^2\,(n - 1)}} \qquad \text{equation 1}$$

in which

- *d* is the logarithm (base 10) of the ratio of consecutive dilutions (e.g. *d* = 0.699 if we are using 5-fold consecutive dilutions);
- *n* is the number of wells inoculated at each dilution (often 8);
- *r* is the total number of wells that become infected, counting *all* the dilutions used;
- $r_1$ is the number of wells infected at the first dilution, $r_2$ is the number infected at the next dilution, and so on.
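
As a concrete illustration, equation 1 can be computed directly from the raw counts. The following is a minimal Python sketch (the function name and the example counts are illustrative, not from the original assay):

```python
import math

def limiting_dilution_se(counts, n, d):
    """Estimated standard error per equation 1.

    counts -- infected wells r_1, r_2, ... at each dilution, strongest first
    n      -- number of wells inoculated at each dilution
    d      -- log10 of the ratio of consecutive dilutions
    """
    r = sum(counts)                          # total infected wells
    sum_sq = sum(ri * ri for ri in counts)   # sum of r_i squared
    return d * math.sqrt((n * r - sum_sq) / (n ** 2 * (n - 1)))

# Hypothetical assay: 8 wells per dilution, 5-fold dilutions (d = log10 5)
print(limiting_dilution_se([8, 8, 6, 3, 1, 0], n=8, d=math.log10(5)))
```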

**The mathematical model**

Let us consider how changes to *d* or *n* will affect the value of *SE*. Suppose we number the tubes used in the serial dilution 1, 2, 3, … and plot a graph showing how the number of infected wells varies with tube number.

The area under this graph is approximately equal to the sum of the component areas created by joining consecutive points with straight lines. Simple geometry tells us that this area is $r - \tfrac{1}{2}r_1$, which will almost always be the same as $r - \tfrac{1}{2}n$, because the first dilution normally infects every well, so that $r_1 = n$. It is also clear that the area will expand in direct proportion to *n* and in inverse proportion to *d*, so we can say that

$$r - \tfrac{1}{2}n = \frac{k_1\,n}{d} \qquad \text{equation 2,}$$

where *k*_{1}
is a constant of proportionality.

By plotting a similar graph showing how the *square* of the number of infected wells
varies with tube number and using the same kind of proportionality argument we
can show that

$$\sum_i r_i^2 - \tfrac{1}{2}n^2 = \frac{k_2\,n^2}{d} \qquad \text{equation 3,}$$

where *k*_{2}
is another constant of proportionality.

If we now use equations 2 and 3 to substitute for $r$ and for $\sum_i r_i^2$ in equation 1, we get

$$SE = \sqrt{\frac{(k_1 - k_2)\,d}{n - 1}},$$

which shows us how *SE*
will be related to *d* and *n*, but not quite how it will relate to
total cost.
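
As a quick check on this algebra, a few lines of sympy (a sketch; the symbol names are ours) confirm that substituting equations 2 and 3 into equation 1 gives $SE^2 = (k_1 - k_2)\,d/(n - 1)$:

```python
import sympy as sp

d, n = sp.symbols("d n", positive=True)
k1, k2 = sp.symbols("k1 k2")

r = n / 2 + k1 * n / d                # equation 2, rearranged for r
sum_r_sq = n**2 / 2 + k2 * n**2 / d   # equation 3, rearranged for sum(r_i^2)

SE = d * sp.sqrt((n * r - sum_r_sq) / (n**2 * (n - 1)))  # equation 1

# The difference simplifies to zero, confirming SE^2 = (k1 - k2)*d/(n - 1)
print(sp.simplify(SE**2 - (k1 - k2) * d / (n - 1)))      # -> 0
```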


**The Cost of Precision**


The cost of preparing the cultures and reading the results will be roughly proportional to the total number *N* of wells used, which will be $n \times m$, where *m* is the number of different dilutions used. Now, to cover the necessary range of dilutions, the number of dilution intervals required will be inversely proportional to *d*, and the number of dilutions used will be the number of intervals plus 1. Therefore we can write another proportionality relation, namely

$$d = \frac{k_3}{m - 1}.$$

If we use this relationship to substitute for *d* in the above formula for *SE*, we get

**The Result**

$$SE = \frac{k}{\sqrt{(n - 1)(m - 1)}},$$

where $k = \sqrt{(k_1 - k_2)\,k_3}$ is a constant.

**The Implications**


It can be seen by simple calculation from the above formula (or by differential calculus) that if we fix the value of $n \times m$ by economic constraints, the smallest possible value of *SE* will be obtained by making *n* = *m*. In other words, the number of replicates per dilution should be about equal to the number of different dilutions.
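
The calculus takes only a few lines in sympy (a sketch; the symbol names are ours): with $N = nm$ held fixed, $(n-1)(m-1)$ is maximised, and hence *SE* minimised, at $n = m = \sqrt{N}$:

```python
import sympy as sp

n, N = sp.symbols("n N", positive=True)
m = N / n                             # total wells N = n*m held fixed
f = (n - 1) * (m - 1)                 # SE = k/sqrt(f), so maximise f

print(sp.solve(sp.diff(f, n), n))     # -> [sqrt(N)], i.e. n = m = sqrt(N)
```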

It is easy to show by calculation that moderate departures from this ideal will have only a small detrimental effect on *SE*: it is only when there is a gross departure from the *n* = *m* rule that the *SE* begins to expand substantially. For example, consider an assay in which we can afford to use a total of 64 wells. We could use 8 dilutions, each with 8 replicates, and get a standard error $SE_1$; or we could use 16 dilutions, each with 4 replicates, and get $SE_2$; or we could use 32 dilutions, each with 2 replicates, and get $SE_3$. The above formula tells us that the values of $SE_1$, $SE_2$ and $SE_3$ will be in the ratio 1 : 1.043 : 1.257. So the second experimental design is only a little poorer than the first. It is not until we get to extremes that the *SE* becomes large enough to be noticeably wasteful.
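
These ratios are easy to reproduce numerically; the sketch below (the function name is ours) evaluates the formula for each 64-well design and normalises by the first:

```python
import math

def relative_se(n, m):
    # SE = k / sqrt((n - 1)(m - 1)); the constant k cancels in ratios
    return 1 / math.sqrt((n - 1) * (m - 1))

designs = [(8, 8), (4, 16), (2, 32)]  # (replicates n, dilutions m), N = 64
base = relative_se(*designs[0])
for n, m in designs:
    print(f"n = {n:2d}, m = {m:2d}: SE ratio = {relative_se(n, m) / base:.3f}")
# -> 1.000, 1.043, 1.257
```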

What if we are just not satisfied with the *SE* that we get, even when we have made *n* = *m*? Then the above formula tells us that in order to cut the *SE* by half we would need to increase $(n - 1)(m - 1)$ four-fold. Since $(n - 1)(m - 1) = nm - n - m + 1$, and $nm$ is usually by far the largest term in this expression, we can see that the only way to have much impact on *SE*, apart from making *n* = *m*, is to increase the total number of wells very substantially. In effect, to reduce *SE* by a factor *f* we need to multiply the total number of wells *N* by about $f^2$.

This relationship enables us to attach an economic cost to
any departures from the *n = m* rule.
In the case of 16 dilutions, each with 4 replicates, we could compensate for
the poor design by increasing *N* by a
mere 9 per cent, but in the case of 32 dilutions, each with 2 replicates, we
could only compensate for the poor design by increasing *N* by about 58 per cent. Obviously, it would be much cheaper just to
use a near-optimal design in the first place.
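
The formula gives these costs directly: to match the SE of the optimal design, *N* must grow by about $f^2 - 1$. A minimal sketch (the function name is ours):

```python
import math

def extra_wells_percent(n, m, n_opt=8, m_opt=8):
    """Percent increase in N needed for design (n, m) to match (n_opt, m_opt)."""
    f_sq = ((n_opt - 1) * (m_opt - 1)) / ((n - 1) * (m - 1))  # f squared
    return 100 * (f_sq - 1)

print(f"{extra_wells_percent(4, 16):.0f}%")  # -> 9%  (16 dilutions x 4 replicates)
print(f"{extra_wells_percent(2, 32):.0f}%")  # -> 58% (32 dilutions x 2 replicates)
```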