Most of the programs use the same parameter file. See the BLUPF90 manual for details on this file. Some programs have restrictions on parameter files and some programs use optional parameters. Details on these can be found in the Readme* files in the directories of the specific programs.
Below are comments on
specific programs and descriptions of some options.
BLUP
BLUPF90 calculates BLUP with three solvers. PCG is the default solver and is usually the fastest one. There is an option in the PCG module to use a block preconditioner at the cost of higher memory requirements.
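The PCG iteration needs only matrix-vector products with the coefficient matrix plus a preconditioner solve; a block preconditioner uses small diagonal blocks (e.g., per level across traits) in place of single diagonal elements. Below is a minimal generic sketch of the algorithm in Python/NumPy, given only as an illustration of PCG itself and not as BLUPF90 code; note that the residual is updated rather than recomputed each round, which is where rounding errors can accumulate.

  import numpy as np

  def pcg(A, b, apply_m_inv, tol=1e-12, max_rounds=1000):
      # Generic preconditioned conjugate gradient for A x = b.
      # apply_m_inv applies the inverse of the preconditioner, e.g. the
      # inverse of the diagonal or of small diagonal blocks of A.
      x = np.zeros_like(b)
      r = b - A @ x                      # residual
      z = apply_m_inv(r)                 # preconditioned residual
      p = z.copy()                       # search direction
      rz = r @ z
      for _ in range(max_rounds):
          Ap = A @ p
          alpha = rz / (p @ Ap)
          x += alpha * p
          r -= alpha * Ap                # residual is updated, not recomputed,
                                         # so rounding errors can accumulate
          if np.linalg.norm(r) < tol * np.linalg.norm(b):
              break
          z = apply_m_inv(r)
          rz_new = r @ z
          p = z + (rz_new / rz) * p
          rz = rz_new
      return x

  # toy system with a diagonal (Jacobi) preconditioner
  A = np.array([[4.0, 1.0], [1.0, 3.0]])
  b = np.array([1.0, 2.0])
  print(pcg(A, b, lambda r: r / np.diag(A)))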
Solving with SOR requires less memory but usually converges more slowly. In pathological cases SOR may be more reliable because it does not have the error-accumulation properties of PCG.
FSPAK is usually the most accurate method but also uses the most memory.
With SOR, one can convert the program to single precision (rh=r4) and use only half the memory.
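Solver-related settings are normally given as OPTION lines at the end of the parameter file. The keywords below are assumptions based on typical BLUPF90 usage and should be verified against the manual for the version in use: solv_method selects the solver (PCG is the default; SOR and FSPAK are the alternatives), blksize sets the preconditioner block size, and maxrounds and conv_crit control the stopping rules.

  OPTION solv_method FSPAK
  OPTION blksize 4
  OPTION maxrounds 5000
  OPTION conv_crit 1e-12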
Variance component estimation
REMLF90 uses EM REML. For most problems it is the most reliable algorithm but can take hundreds of rounds of iteration. REMLF90 was found to have problems converging with random regression models. In such cases, starting values in the parameter file that are too large rather than too small usually help.
AIREMLF90 usually converges much faster but sometimes does not converge. Very slow convergence usually indicates that the model is overparameterized and that there is insufficient information to estimate some variances.
AIREMLRES is a version of AIREMLF90 that implements heterogeneous residuals. It may not be as up to date with fixes as AIREMLF90.
GIBBSF90 is a simple Gibbs sampler. It is very easy to change but is slow because it recreates the LHS and RHS every round. Use postgibbsf90 to analyze samples from this and the other Gibbs samplers. In practical cases, results from Gibbs samplers and REML are similar; one would choose one or the other based on computing feasibility. If there are large differences beyond sampling errors, this usually indicates problems with the Gibbs sampler; try longer chains or different priors.
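The kind of summary produced by post-Gibbs analysis (posterior means, standard deviations, and a measure of how correlated successive samples are) can be illustrated with a small Python/NumPy sketch. This is not the postgibbsf90 program, and the effective-sample-size formula is only a rough lag-1 approximation.

  import numpy as np

  def summarize_chain(samples, burn_in=0):
      # Posterior summary for one parameter from a Gibbs chain.
      s = np.asarray(samples, dtype=float)[burn_in:]
      mean, sd = s.mean(), s.std(ddof=1)
      # lag-1 autocorrelation: values near 1 mean slow mixing and a small
      # effective number of independent samples (rough AR(1) approximation)
      lag1 = np.corrcoef(s[:-1], s[1:])[0, 1]
      ess = len(s) * (1.0 - lag1) / (1.0 + lag1)
      return mean, sd, lag1, ess

  # e.g. a chain of sampled additive genetic variances (placeholder numbers)
  chain = np.random.default_rng(0).normal(0.35, 0.02, size=10000)
  print(summarize_chain(chain, burn_in=1000))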
Gibbs samplers may be slow to achieve convergence if initial values are far from those at convergence, e.g., 100 times too low or too high. Before using more complicated models, Karin Meyer advocates running a series of simpler models first.
GIBBS1F90 is usually much faster than GIBBSF90 because the LHS is stored only once and separately from the relationship matrices. Successful analyses were made with over 20 traits. However, if the models differ per trait, the lines for the effects need to be modified, as sketched below. Also, with too many differences in models among traits, the program becomes increasingly slow.
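In the parameter file, an effect that is not fitted for a trait is marked with a 0 in that trait's position column. The snippet below is a hypothetical two-trait EFFECTS section (the data-file positions and level counts are made up) and should be checked against the parameter-file documentation; here the second effect applies only to trait 1 and the third only to trait 2.

  EFFECTS:
  3 3   10 cross alpha
  4 0    5 cross alpha
  0 5  200 cross alpha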
GIBBS2F90 adds joint sampling
of correlated effects. This results in faster mixing with random regression and
maternal models.
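To see why joint sampling mixes faster, consider two highly correlated effects: updating them one at a time from their univariate conditionals moves in small steps, while drawing them together from the joint normal conditional does not. The following is a generic Python/NumPy sketch of the idea for a bivariate normal with correlation rho, not GIBBS2F90 code. With rho = 0.99, the single-site chain shows lag-1 autocorrelation near rho squared (about 0.98), while the joint draws are essentially uncorrelated.

  import numpy as np

  rho = 0.99                            # strong correlation between two effects
  rng = np.random.default_rng(1)

  def single_site(n):
      # Update each effect from its univariate conditional in turn
      # (small steps, slow mixing when rho is high).
      x = np.zeros(2)
      out = np.empty((n, 2))
      for i in range(n):
          x[0] = rng.normal(rho * x[1], np.sqrt(1.0 - rho**2))
          x[1] = rng.normal(rho * x[0], np.sqrt(1.0 - rho**2))
          out[i] = x
      return out

  def joint(n):
      # Draw both effects at once from their joint normal conditional.
      cov = np.array([[1.0, rho], [rho, 1.0]])
      return rng.multivariate_normal(np.zeros(2), cov, size=n)

  for name, chain in (("single-site", single_site(5000)), ("joint", joint(5000))):
      lag1 = np.corrcoef(chain[:-1, 0], chain[1:, 0])[0, 1]
      print(name, "lag-1 autocorrelation:", round(lag1, 3))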
GIBBS3F90 adds estimation of
heterogeneous residual covariances in classes. The computing costs usually
increase with the number of classes.