The KLD (Kullback–Leibler divergence) is more or less a measure of
how much information is lost when an approximating distribution is
used in place of the actual probability distribution.
How you calculate it depends on whether you are considering
discrete or continuous values for the distribution.
If you have discrete values,
KLD = Σ P(i) log [P(i)/Q(i)] (summing over the values of i)
where P(i) is the "true" distribution and Q(i) the corresponding
approximation. By convention, terms with P(i) = 0 contribute zero.
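As a sketch, the discrete sum can be computed directly; the two
distributions below are made-up examples, and the function name
kld_discrete is just a placeholder:

```python
import math

def kld_discrete(p, q):
    """KL divergence D(P || Q) for discrete distributions given as
    lists of probabilities over the same outcomes.  Terms with
    P(i) = 0 contribute zero, matching the usual convention."""
    return sum(pi * math.log(pi / qi)
               for pi, qi in zip(p, q) if pi > 0)

# "True" distribution P and an approximation Q over three outcomes.
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(kld_discrete(p, q))  # small positive number: Q loses a little information
```

Note that the KLD is zero when Q matches P exactly and positive
otherwise, which is what makes it usable as a measure of lost
information.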
If the probability is described by a continuous function, i.e. the
variable can assume any value over a certain range (usually with a
different probability density at different values, since a uniform
density is a pretty boring problem), then
KLD = ∫ p(x)log[p(x)/q(x)] dx (integrated from -∞ to +∞)
where p(x) is the true function of the probability - the
"density" of P, and q(x) is the approximated function of the
probability - the "density" of Q.
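The integral can be approximated numerically.  A minimal sketch,
using two normal densities as p(x) and q(x) and a midpoint Riemann
sum in place of the full integral from -∞ to +∞ (the function names
and the integration range [-10, 10] are choices made here, not part
of the formula):

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution with mean mu and std sigma."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def kld_continuous(p, q, lo, hi, n=100_000):
    """Approximate ∫ p(x) log[p(x)/q(x)] dx with a midpoint Riemann
    sum over [lo, hi]; the range should cover essentially all of p's mass."""
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        px, qx = p(x), q(x)
        if px > 0 and qx > 0:
            total += px * math.log(px / qx) * dx
    return total

# True density: standard normal; approximation: same shape shifted to mean 1.
p = lambda x: normal_pdf(x, 0.0, 1.0)
q = lambda x: normal_pdf(x, 1.0, 1.0)
print(kld_continuous(p, q, -10, 10))  # ≈ 0.5
```

For two normals with equal sigma the KLD has the closed form
(μ₁ - μ₂)²/(2σ²), which here is 0.5, so the numerical result can be
checked against it.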
Note that these formulas are written for a single variable. The same
idea carries over to multi-variable distributions, but the sum or
integral then runs jointly over all components, so the calculation
is correspondingly more involved.