```

Geometry Searches and Internal Coordinates

Stationary points are places on the potential energy
surface with a zero gradient vector (first derivative of
the energy with respect to nuclear coordinates).  These
include minima (whether relative or global), better known
to chemists as reactants, products, and intermediates; as
well as transition states (also known as saddle points).

The two types of stationary points have a precise
mathematical definition, depending on the curvature of the
potential energy surface at these points.  If all of the
eigenvalues of the hessian matrix (second derivative
of the energy with respect to nuclear coordinates) are
positive, the stationary point is a minimum.  If there is
one, and only one, negative curvature, the stationary
point is a transition state.  Points with more than one
negative curvature do exist, but are not important in
chemistry.  Because vibrational frequencies are basically
the square roots of the curvatures, a minimum has all
real frequencies, and a saddle point has one imaginary
vibrational "frequency".
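
These definitions are easy to check numerically.  The
sketch below (plain Python with toy 2x2 Hessians, not
GAMESS output) classifies a stationary point from the
signs of the Hessian eigenvalues:

```python
import math

def eig2(h):
    """Eigenvalues of a symmetric 2x2 Hessian [[a, b], [b, c]], closed form."""
    a, b, c = h[0][0], h[0][1], h[1][1]
    mean = (a + c) / 2.0
    disc = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    return (mean - disc, mean + disc)

def classify(h):
    """Label a stationary point by counting negative curvatures."""
    neg = sum(1 for e in eig2(h) if e < 0)
    if neg == 0:
        return "minimum"           # all real vibrational frequencies
    if neg == 1:
        return "transition state"  # one imaginary "frequency"
    return "higher-order saddle"

# E = x**2 + y**2 has Hessian diag(2, 2): a minimum.
print(classify([[2.0, 0.0], [0.0, 2.0]]))   # minimum
# E = x**2 - y**2 has Hessian diag(2, -2): a saddle point.
print(classify([[2.0, 0.0], [0.0, -2.0]]))  # transition state
```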

GAMESS locates minima by geometry optimization, as
RUNTYP=OPTIMIZE, and transition states by saddle point
searches, as RUNTYP=SADPOINT.  In many ways these are
similar, and in fact nearly identical FORTRAN code is used
for both.  The term "geometry search" is used here to
describe features which are common to both procedures.
The input to control both RUNTYPs is found in the \$STATPT
group.

As will be noted in the symmetry section below, an
OPTIMIZE run does not always find a minimum, and a
a SADPOINT run may not find a transition state, even though
the gradient is brought to zero.  You can prove you have
located a minimum or saddle point only by examining the
local curvatures of the potential energy surface.  This
can be done by following the geometry search with a
RUNTYP=HESSIAN job, which should be a matter of routine.

quasi-Newton Searches

Geometry searches are most effectively done by what is
called a quasi-Newton-Raphson procedure.  These methods
assume a quadratic potential surface, and require the
exact gradient vector and an approximation to the hessian.
It is the approximate nature of the hessian that makes the
method "quasi".  The rate of convergence of the geometry
search depends on how quadratic the real surface is, and
the quality of the hessian.  The latter is something you
have control over, and is discussed in the next section.

GAMESS contains different implementations of quasi-
Newton procedures for finding stationary points, namely
METHOD=NR, RFO, QA, and the seldom used SCHLEGEL.  They
differ primarily in how the step size and direction are
controlled, and how the Hessian is updated.  The CONOPT
method is a way of forcing a geometry away from a minimum
towards a TS.  It is not a quasi-Newton method, and is
described at the very end of this section.

The NR method employs a straight Newton-Raphson step.
There is no step size control; the algorithm will simply
try to locate the nearest stationary point, which may be a
minimum, a TS, or any higher order saddle point.  NR is
not intended for general use, but is used by GAMESS in
connection with some of the other methods after they have
homed in on a stationary point, and by Gradient Extremal
runs where location of higher order saddle points is
common.  NR requires a very good estimate of the geometry
in order to converge on the desired stationary point.
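
On an exactly quadratic surface the straight NR step
dx = -g/H lands on the nearest stationary point in a single
move, whatever the sign of the curvature; this is why NR,
lacking step control, cannot tell minima from saddle points.
A minimal 1-D sketch (toy numbers, not GAMESS code):

```python
def nr_step(grad, hess):
    """Straight Newton-Raphson step dx = -g/h (1-D, no step-size control)."""
    return -grad / hess

# Quadratic toy surface E(x) = 3*(x - 1.5)**2: gradient 6*(x - 1.5), Hessian 6.
x = 0.0
x += nr_step(6.0 * (x - 1.5), 6.0)
print(x)  # lands exactly on the stationary point x = 1.5 in one step

# With negative curvature, E(x) = -3*(x - 1.5)**2, the same step converges
# to the same point: NR cannot distinguish a minimum from a maximum.
x = 0.0
x += nr_step(-6.0 * (x - 1.5), -6.0)
print(x)
```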

The RFO and QA methods are two different versions of
the so-called augmented Hessian techniques.  They both
employ Hessian shift parameter(s) in order to control the
step length and direction.

In the RFO method, the shift parameter is determined by
approximating the PES with a Rational Function, instead of
a second order Taylor expansion.  For a RUNTYP=SADPOINT,
the TS direction is treated separately, giving two shift
parameters.  This is known as a Partitioned Rational
Function Optimization (P-RFO).  The shift parameter(s)
ensure that the augmented Hessian has the correct eigen-
value structure, all positive for a minimum search, and
one negative eigenvalue for a TS search.  The (P)-RFO step
can have any length, but if it exceeds DXMAX, the step is
simply scaled down.

In the QA (Quadratic Approximation) method, the shift
parameter is determined by the requirement that the step
size should equal DXMAX.  There is only one shift
parameter for both minima and TS searches.  Again the
augmented Hessian will have the correct structure.  There
is another way of describing the same algorithm, namely as
a minimization on the "image" potential.  The latter is
known as TRIM (Trust Radius Image Minimization).  The
working equation is identical in these two methods.

When the RFO steplength is close to DXMAX, there is
little difference between the RFO and QA methods.  However,
the RFO step may in some cases exceed DXMAX significantly,
and a simple scaling of the step will usually not produce
the best direction.  The QA step is the best step on the
hypersphere with radius DXMAX.  For this reason QA is the
default algorithm.
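
The QA prescription (choose the shift so the step length
equals DXMAX) can be sketched for a diagonal two-mode
model.  This is illustrative Python, not the GAMESS
implementation; the bracket values are arbitrary:

```python
import math

def qa_step(g, h, dxmax):
    """Shifted Newton step (H - lam*I) dx = -g with |dx| = dxmax (toy model).

    h holds the eigenvalues of a diagonal 2-D Hessian, g the gradient in
    the same basis.  For a minimum search lam is kept below the lowest
    eigenvalue, so the shifted Hessian is positive definite, which is the
    eigenvalue structure the augmented-Hessian methods enforce.
    """
    def step(lam):
        return (-g[0] / (h[0] - lam), -g[1] / (h[1] - lam))

    def length(lam):
        return math.hypot(*step(lam))

    lo, hi = min(h) - 1e6, min(h) - 1e-9   # bracket for the shift lam
    if length(hi) <= dxmax:                # shifted step already short enough
        return step(hi)
    for _ in range(200):                   # bisect until |dx| = dxmax
        mid = 0.5 * (lo + hi)
        if length(mid) > dxmax:
            hi = mid
        else:
            lo = mid
    return step(0.5 * (lo + hi))

# One negative curvature in the model; the step still points downhill
# and sits on the trust-radius hypersphere.
dx = qa_step(g=(1.0, 2.0), h=(-0.5, 3.0), dxmax=0.3)
print(math.hypot(*dx))  # ~0.3, the best step on the DXMAX hypersphere
```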

Near a stationary point the straight NR algorithm is
the most efficient.  The RFO and QA may be viewed as
methods for guiding the search in the "correct" direction
when starting far from the stationary point.  Once the
stationary point is approached, the RFO and QA methods
switch to NR automatically, when the NR steplength drops
below 0.10 or DXMAX, whichever is smaller.

The QA method works so well that we use it exclusively,
and so the SCHLEGEL method will probably be omitted from
some future version of GAMESS.

You should read the papers mentioned below in order to
understand how these methods are designed to work.  The
first 3 papers describe the RFO and TRIM/QA algorithms.  A
good but slightly dated summary of search procedures is
that of Schlegel (reference 6).  Most of the FORTRAN code
for geometry searches,
and some of the discussion in this section was written by
Frank Jensen of the University of Aarhus, whose paper
compares many of the algorithms implemented in GAMESS:

1. J.Baker  J.Comput.Chem. 7, 385-395(1986)
2. T.Helgaker  Chem.Phys.Lett. 182, 503-510(1991)
3. P.Culot, G.Dive, V.H.Nguyen, J.M.Ghuysen
Theoret.Chim.Acta  82, 189-205(1992)
4. H.B.Schlegel  J.Comput.Chem. 3, 214-218(1982)
5. S.Bell, J.S.Crighton
J.Chem.Phys. 80, 2464-2475(1984).
6. H.B.Schlegel  Advances in Chemical Physics (Ab Initio
Methods in Quantum Chemistry, Part I), volume 67,
K.P.Lawley, Ed.  Wiley, New York, 1987, pp 249-286.
7. F.Jensen  J.Chem.Phys. 102, 6706-6718(1995).

the nuclear Hessian

Although quasi-Newton methods require only an
approximation to the true hessian, the quality of this
matrix has a great effect on convergence of the geometry
search.

There is a procedure contained within GAMESS for
guessing a positive definite hessian matrix, HESS=GUESS.
If you are using Cartesian coordinates, the guess hessian
is based on pairwise atom stretches.  The guess is more
sophisticated when internal coordinates are defined, as
empirical rules will be used to estimate stretching and
bending force constants.  Other angular force constants are
set to 1/4.  The guess often works well for minima, but
cannot possibly find transition states (because it is
positive definite).  Therefore, GUESS may not be selected
for SADPOINT runs.

Two options for providing a more accurate hessian are
HESS=READ and CALC.  For the latter, the true hessian is
obtained by direct calculation at the initial geometry, and
then the geometry search begins, all in one run.  The READ
option allows you to feed in the hessian in a \$HESS group,
as obtained by a RUNTYP=HESSIAN job.  The second procedure
is actually preferable, as you get a chance to see the
frequencies.  Then, if the local curvatures look good, you
can commit to the geometry search.  Be sure to include a
\$GRAD group in the HESS=READ job so that GAMESS can take
its first step immediately.

Note also that you can compute the hessian at a lower
basis set and/or wavefunction level, and read it into a
higher level geometry search.  In fact, the \$HESS group
could be obtained at the semiempirical level.  This trick
works because the hessian is 3Nx3N for N atoms, no matter
what atomic basis is used.  The gradient from the lower
level is of course worthless, as the geometry search must
work with the exact gradient of the wavefunction and basis
level calculation!

You often get what you pay for.  HESS=GUESS is free, but
may lead to significantly more steps in the geometry
search.  The other two options are more expensive at the
beginning, but may pay back by rapid convergence to the
stationary point.

The hessian update frequently improves the hessian for a
few steps (especially for HESS=GUESS), but then breaks
down.  The symptoms are a nice lowering of the energy or
the RMS gradient for maybe 10 steps, followed by crazy
steps.  You can help by putting the best coordinates into
\$DATA, and resubmitting, to make a fresh determination of
the hessian.

The default hessian update for OPTIMIZE runs is BFGS,
which is likely to remain positive definite.  The POWELL
update is the default for SADPOINT runs, since the hessian
can develop a negative curvature as the search progresses.
The POWELL update is also used by the METHOD=NR and CONOPT
since the Hessian may have any number of negative
eigenvalues in these cases.  The MSP update is a mixture of
Murtagh-Sargent and Powell, suggested by Josep Bofill,
(J.Comput.Chem., 15, 1-11, 1994).  It sometimes works
slightly better than Powell, so you may want to try it.
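
The BFGS formula, B+ = B - (Bs)(Bs)^T/(s^T B s)
+ y y^T/(y^T s), is easy to sketch for a 2x2 toy case
(illustrative Python, not the GAMESS code); here s is the
geometry step and y the change in the gradient:

```python
def bfgs_update(B, s, y):
    """BFGS update of an approximate 2x2 Hessian B (lists of lists).

    s = x_new - x_old (the step), y = g_new - g_old (gradient change).
    The update preserves positive definiteness whenever y.s > 0, which
    is why BFGS suits minimum searches.
    """
    Bs = [B[0][0] * s[0] + B[0][1] * s[1],
          B[1][0] * s[0] + B[1][1] * s[1]]
    sBs = s[0] * Bs[0] + s[1] * Bs[1]
    ys = y[0] * s[0] + y[1] * s[1]
    return [[B[i][j] - Bs[i] * Bs[j] / sBs + y[i] * y[j] / ys
             for j in range(2)] for i in range(2)]

# Toy quadratic surface with true Hessian diag(2, 6); unit diagonal start.
B = [[1.0, 0.0], [0.0, 1.0]]           # a HESS=GUESS-like diagonal guess
s = [0.1, 0.2]                          # some step
y = [2.0 * s[0], 6.0 * s[1]]            # exact gradient change, g = H x
B = bfgs_update(B, s, y)
# The updated B satisfies the secant condition B s = y (up to roundoff):
print([B[0][0] * s[0] + B[0][1] * s[1], B[1][0] * s[0] + B[1][1] * s[1]], y)
```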

coordinate choices

Optimization in cartesian coordinates has a reputation
of converging slowly.  This is largely due to the fact
that translations and rotations are usually left in the
problem.  Numerical problems caused by the small eigen-
values associated with these degrees of freedom are the
source of this poor convergence.  The methods in GAMESS
project the hessian matrix to eliminate these degrees of
freedom, so these should no longer cause a problem.  Nonetheless,
Cartesian coordinates are in general the most slowly
convergent coordinate system.

The use of internal coordinates (see NZVAR in \$CONTRL
as well as \$ZMAT) also eliminates the six rotational and
translational degrees of freedom.  Also, when internal
coordinates are used, the GUESS hessian is able to use
empirical information about bond stretches and bends.
On the other hand, there are many possible choices for the
internal coordinates, and some of these may lead to much
poorer convergence of the geometry search than others.
Particularly poorly chosen coordinates may not even
correspond to a quadratic surface, thereby ending all hope
that a quasi-Newton method will converge.

Internal coordinates are frequently strongly coupled.
Because of this, Jerry Boatz has called them "infernal
coordinates"!  A very common example to illustrate this
might be a bond length in a ring, and the angle on the
opposite side of the ring.  Clearly, changing one changes
the other simultaneously.  A more mathematical definition
of "coupled" is to say that there is a large off-diagonal
element in the hessian.  In this case convergence may be
unsatisfactory, especially with a diagonal GUESS hessian,
where a "good" set of internals is one with a diagonally
dominant hessian.  Of course, if you provide an accurately
computed hessian, it will have large off-diagonal values
where those are truly present.  Even so, convergence may
be poor if the coordinates are coupled through large 3rd
or higher derivatives.  The best coordinates are therefore
those which are the most "quadratic".

One very popular set of internal coordinates is the
usual "model builder" Z-matrix input, where for N atoms,
one uses N-1 bond lengths, N-2 bond angles, and N-3 bond
torsions.  The popularity of this choice is based on its
ease of use in specifying the initial molecular geometry.
Typically, however, it is the worst possible choice of
internal coordinates, and in the case of rings, is not
even as good as Cartesian coordinates.

However, GAMESS does not require this particular mix
of the common types.  GAMESS' only requirement is that you
use a total of 3N-6 coordinates, chosen from these 3 basic
types, or several more exotic possibilities.  (Of course,
we mean 3N-5 throughout for linear molecules).  These
additional types of internal coordinates include linear
bends for 3 collinear atoms, out of plane bends, and so on.
There is no reason at all why you should place yourself in
a straitjacket of N-1 bonds, N-2 angles, and N-3
torsions.
If the molecule has symmetry, be sure to use internals
which are symmetrically related.
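
As a quick arithmetic check, the model-builder mix does
account for all 3N-6 degrees of freedom:

```python
def zmat_count(n):
    """Coordinate count for the model-builder choice:
    N-1 bonds, N-2 angles, N-3 torsions."""
    return (n - 1) + (n - 2) + (n - 3)

for n in range(4, 8):
    assert zmat_count(n) == 3 * n - 6  # matches the required 3N-6 total
print(zmat_count(5))  # 9 coordinates for a 5-atom nonlinear molecule
```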

For example, the most effective choice of coordinates
for the atoms in a four membered ring is to define all
four sides, any one of the internal angles, and a dihedral
defining the ring pucker.  For a six membered ring, the
best coordinates seem to be 6 sides, 3 angles, and 3
torsions.  The angles should be every other internal
angle, so that the molecule can "breathe" freely.  The
torsions should be arranged so that the central bond of
each is placed on alternating bonds of the ring, as if
they were pi bonds in Kekule benzene.  For a five membered
ring, we suggest all 5 sides, 2 internal angles, again
alternating every other one, and 2 dihedrals to fill in.
The internal angles of necessity skip two atoms where the
ring closes.  Larger rings should generalize on the idea
of using all sides but only alternating angles.  If there
is an odd number of ring atoms, pick a starting atom and
alternate angles as you go around from this position.

Rings and more especially fused rings can be tricky.
For these systems, especially, we suggest the Cadillac of
internal coordinates, the "natural internal coordinates"
of Peter Pulay.  For a description of these, see

P.Pulay, G.Fogarasi, F.Pang, J.E.Boggs,
J.Am.Chem.Soc. 101, 2550-2560 (1979).
G.Fogarasi, X.Zhou, P.W.Taylor, P.Pulay
J.Am.Chem.Soc. 114, 8191-8201 (1992).

These are linear combinations of local coordinates, except
in the case of rings.  The examples given in these two
papers are very thorough.

An illustration of these types of coordinates is given
in the example job EXAM25.INP, distributed with GAMESS.
This is a nonsense molecule, designed to show many kinds
of functional groups.  It is defined using standard bond
distances with a classical Z-matrix input, and the angles
in the ring are adjusted so that the starting value of
the unclosed OO bond is also a standard value.

Using Cartesian coordinates is easiest, but takes a very
large number of steps to converge.  This, however, is better
than using the classical Z-matrix internals given in \$DATA,
which is accomplished by setting NZVAR to the correct 3N-6
value.  The geometry search changes the OO bond length to
a very short value on the 1st step, and the SCF fails to
converge.  (Note that if you have used dummy atoms in the
\$DATA input, you cannot simply enter NZVAR to optimize in
internal coordinates, instead you must give a \$ZMAT which
involves only real atoms).

The third choice of internal coordinates is the best set
which can be made from the simple coordinates.  It follows
the advice given above for five membered rings, and because
it includes the OO bond, has no trouble with crashing this
bond.  It takes 20 steps to converge, so the trouble of
generating this \$ZMAT is certainly worth it compared to the
use of Cartesians.

Natural internal coordinates are defined in the final
group of input.  The coordinates are set up first for the
ring, including two linear combinations of all angles and
all torsions within the ring.  After this the methyl is
hooked to the ring as if it were a NH group, using the
usual terminal methyl hydrogen definitions.  The H is
hooked to this same ring carbon as if it were a methine.
The NH and the CH2 within the ring follow Pulay's rules
exactly.  The amount of input is much greater than a normal
Z-matrix.  For example, 46 internal coordinates are given,
which are then placed in 3N-6=33 linear combinations.  Note
that natural internals tend to be rich in bends, and short
on torsions.

The energy results for the three coordinate systems
which converge are as follows:

NSERCH    Cart          good Z-mat        nat. int.
 0   -48.6594935049   -48.6594935049   -48.6594935049
 1   -48.6800538676   -48.6806631261   -48.6838361406
 2   -48.6822702585   -48.6510215698   -48.6874045449
 3   -48.6858299354   -48.6882945647   -48.6932811528
 4   -48.6881499412   -48.6849667085   -48.6946836332
 5   -48.6890226067   -48.6911899936   -48.6959800274
 6   -48.6898261650   -48.6878047907   -48.6973821465
 7   -48.6901936624   -48.6930608185   -48.6987652146
 8   -48.6905304889   -48.6940607117   -48.6996366016
 9   -48.6908626791   -48.6949137185   -48.7006656309
10   -48.6914279465   -48.6963767038   -48.7017273728
11   -48.6921521142   -48.6986608776   -48.7021504975
12   -48.6931136707   -48.7007305310   -48.7022405019
13   -48.6940437619   -48.7016095285   -48.7022548935
14   -48.6949546487   -48.7021531692   -48.7022569328
15   -48.6961698826   -48.7022080183   -48.7022570260
16   -48.6973813002   -48.7022454522   -48.7022570662
17   -48.6984850655   -48.7022492840
18   -48.6991553826   -48.7022503853
19   -48.6996239136   -48.7022507037
20   -48.7002269303   -48.7022508393
21   -48.7005379631
22   -48.7008387759
...
50   -48.7022519950

from which you can see that the natural internals are
actually the best set.  The \$ZMAT exhibits upward burps
in the energy at steps 2, 4, and 6, so that for the
same number of steps, these coordinates are always at a
higher energy than the natural internals.

The initial hessian generated for these three columns
contains 0, 33, and 46 force constants.  This assists
the natural internals, but is not the major reason for
its superior performance.  The computed hessian at the
final geometry of this molecule, when transformed into the
natural internal coordinates is almost diagonal.  This
almost complete uncoupling of coordinates is what makes
the natural internals perform so well.  The conclusion
is of course that not all coordinate systems are equal,
and natural internals are the best.  As another example,
we have run the ATCHCP molecule, which is a popular
geometry optimization test, due to its two fused rings:

H.B.Schlegel, Int.J.Quantum Chem., Symp. 26, 253-264(1992)
T.H.Fischer and J.Almlof, J.Phys.Chem. 96, 9768-9774(1992)
J.Baker, J.Comput.Chem. 14, 1085-1100(1993)

Here we have compared the same coordinate types, using a
guess hessian, or a computed hessian.  The latter set of
runs is a test of the coordinates only, as the initial
hessian information is identical.  The results show clearly
the superiority of the natural internals, which like the
previous example, give an energy decrease on every step:

                     guess hessian   computed hessian
Cartesians                65              41 steps
good Z-matrix             32              23
natural internals         24              13

A final example is phosphinoazasilatrane, with three rings
fused on a common SiN bond, in which 112 steps in Cartesian
space became 32 steps in natural internals.  The moral is:

"A little brain time can save a lot of CPU time."

In late 1998, a new kind of internal coordinate method
was included into GAMESS.  This is the delocalized internal
coordinate (DLC) of
J.Baker, A.Kessi, B.Delley
J.Chem.Phys. 105, 192-212(1996)
although as is the usual case, the implementation is not
exactly the same.  Bonds are kept as independent
coordinates, while angles are placed in linear combination
by the DLC process.  There are some interesting options for
applying constraints, and other options to assist the
automatic DLC generation code by either adding or deleting
coordinates.  It is simple to use DLCs in their most basic
form:
   \$contrl nzvar=xx \$end
   \$zmat   dlc=.true. auto=.true. \$end
Our initial experience is that the quality of DLCs is
not as good as explicitly constructed natural internals,
which benefit from human chemical knowledge, but are almost
always better than carefully crafted \$ZMATs using only the
primitive internal coordinates (although we have seen a few
exceptions).  Once we have more numerical experience with
the use of DLC's, we will come back and revise the above
discussion of coordinate choices.  In the meantime, they
are quite simple to choose, so give them a go.

the role of symmetry

At the end of a successful geometry search, you will
have a set of coordinates where the gradient of the energy
is zero.  However your newly discovered stationary point
is not necessarily a minimum or saddle point!

This apparent mystery is due to the fact that the
gradient vector transforms under the totally symmetric
representation of the molecular point group.  As a direct
consequence, a geometry search is point group conserving.
(For a proof of these statements, see J.W.McIver and
A.Komornicki, Chem.Phys.Lett., 10,303-306(1971)).  In
simpler terms, the molecule will remain in whatever point
group you select in \$DATA, even if the true minimum is in
some lower point group.  Since a geometry search only
explores totally symmetric degrees of freedom, the only
way to learn about the curvatures for all degrees of
freedom is RUNTYP=HESSIAN.

As an example, consider disilene, the silicon analog
of ethene.  It is natural to assume that this molecule is
planar like ethene, and an OPTIMIZE run in D2h symmetry
will readily locate a stationary point.  However, as a
calculation of the hessian will readily show, this
structure is a transition state (one imaginary frequency),
and the molecule is really trans-bent (C2h).  A careful
worker will always characterize a stationary point as
either a minimum, a transition state, or some higher order
stationary point (which is not of great interest!) by
performing a RUNTYP=HESSIAN.

The point group conserving properties of a geometry
search can be annoying, as in the preceding example, or
advantageous.  For example, assume you wish to locate the
transition state for rotation about the double bond in
ethene.  A little thought will soon reveal that ethene is
D2h, the 90 degrees twisted structure is D2d, and
structures in between are D2.  Since the saddle point is
actually higher symmetry than the rest of the rotational
surface, you can locate it by RUNTYP=OPTIMIZE within D2d
symmetry.  You can readily find this stationary point with
the diagonal guess hessian!  In fact, if you attempt to do
a RUNTYP=SADPOINT within D2d symmetry, there will be no
totally symmetric modes with negative curvatures, and it
is unlikely that the geometry search will be very well
behaved.

Although a geometry search cannot lower the symmetry,
the gain of symmetry is quite possible.  For example, if
you initiate a water molecule optimization with a trial
structure which has unequal bond lengths, the geometry
search will come to a structure that is indeed C2v (to
within OPTTOL, anyway).  However, GAMESS leaves it up to
you to realize that a gain of symmetry has occurred.

In general, Mother Nature usually chooses more
symmetrical structures over less symmetrical structures.
Therefore you are probably better served to assume the
higher symmetry, perform the geometry search, and then
check the stationary point's curvatures.  The alternative
is to start with artificially lower symmetry, and see if
your system regains higher symmetry.  The problem with
this approach is that you don't necessarily know which
subgroup is appropriate, and you lose the great speedups
GAMESS can obtain from proper use of symmetry.  It is good
to note here that "lower symmetry" does not mean simply
changing the name of the point group and entering more
atoms in \$DATA, instead the nuclear coordinates themselves
must actually be of lower symmetry.

practical matters

Geometry searches do not bring the gradient exactly to
zero.  Instead they stop when the largest component of the
gradient is below the value of OPTTOL, which defaults to
a reasonable 0.0001.   Analytic hessians usually have
residual frequencies below 10 cm**-1 with this degree of
optimization.  The sloppiest value you probably ever want
to try is 0.0005.

If a geometry search runs out of time, or exceeds
NSTEP, it can be restarted.  For RUNTYP=OPTIMIZE, restart
with the coordinates having the lowest total energy
(do a string search on "FINAL").  For RUNTYP=SADPOINT,
restart with the coordinates having the smallest RMS
gradient (do a string search on "RMS", which means root
mean square).  These are not necessarily at the last
geometry!

The "restart" should actually be a normal run, that is
you should not try to use the restart options in \$CONTRL
(which may not work anyway).  A geometry search can be
restarted by extracting the desired coordinates for \$DATA
from the printout, and by extracting the corresponding
\$GRAD group from the PUNCH file.  If a \$GRAD group is
supplied, the program is able to save the time it would
ordinarily take to compute the wavefunction and gradient
at the initial point, which can be a substantial savings.
There is no input to trigger reading of a \$GRAD group: if
found, it is simply read in.  Be careful that the \$GRAD
group actually corresponds to the coordinates in \$DATA, as
GAMESS has no check for this.

Sometimes when you are fairly close to the minimum, an
OPTIMIZE run will take a first step which raises the
energy, with subsequent steps improving the energy and
perhaps finding the minimum.  The erratic first step is
caused by the GUESS hessian.  It may help to limit the size
of this wrong first step, by reducing its radius, DXMAX.
Conversely, if you are far from the minimum, sometimes you
can decrease the number of steps by increasing DXMAX.

When using internals, the program uses an iterative
process to convert the internal coordinate change into
Cartesian space.  In some cases, a small change in the
internals will produce a large change in Cartesians, and
thus produce a warning message on the output.  If these
warnings appear only in the beginning, there is probably
no problem, but if they persist you can probably devise
a better set of coordinates.  You may in fact have one of
the two problems described in the next paragraph.  In
some cases (hopefully very few) the iterations to find
the Cartesian displacement may not converge, producing a
second kind of warning message.  The fix for this may
very well be a new set of internal coordinates as well,
or adjustment of ITBMAT in \$STATPT.

There are two examples of poorly behaved internal
coordinates which can give serious problems.  The first
of these is three angles around a central atom, when
this atom becomes planar (sum of the angles nears 360).
The other is a dihedral where three of the atoms are
nearly linear, causing the dihedral to flip between 0 and
180.  Avoid these two situations if you want your geometry
search to be convergent.
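
The dihedral problem is easy to demonstrate: when three of
the four atoms are nearly collinear, a tiny Cartesian
wiggle swings the computed dihedral by tens of degrees.
This is illustrative Python using a standard atan2 dihedral
formula, with made-up coordinates:

```python
import math

def dihedral(p0, p1, p2, p3):
    """Dihedral angle in degrees for four points (atan2 formulation)."""
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def dot(a, b): return sum(u * v for u, v in zip(a, b))
    def cross(a, b):
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]]
    b0, b1, b2 = sub(p1, p0), sub(p2, p1), sub(p3, p2)
    n1, n2 = cross(b0, b1), cross(b1, b2)
    b1n = math.sqrt(dot(b1, b1))
    m = cross(n1, [u / b1n for u in b1])
    return math.degrees(math.atan2(dot(m, n2), dot(n1, n2)))

# Atoms 1-2-3 nearly collinear: a 0.001 unit shift of the first atom
# swings the dihedral by tens of degrees.
d_bad = (dihedral([0, 0, 0.001], [1, 0, 0], [2, 1e-4, 0], [3, 1, 1])
         - dihedral([0, 0, 0], [1, 0, 0], [2, 1e-4, 0], [3, 1, 1]))
# The same shift on a well-bent backbone barely moves the dihedral.
d_ok = (dihedral([0, 0, 0.001], [1, 0, 0], [2, 1, 0], [3, 1, 1])
        - dihedral([0, 0, 0], [1, 0, 0], [2, 1, 0], [3, 1, 1]))
print(abs(d_bad), abs(d_ok))  # the first change is huge, the second tiny
```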

Sometimes it is handy to constrain the geometry search
by freezing one or more coordinates, via the IFREEZ array.
For example, constrained optimizations may be useful while
trying to determine what area of a potential energy surface
contains a saddle point.  If you try to freeze coordinates
with an automatically generated \$ZMAT, you need to know
that the order of the coordinates defined in \$DATA is

y
y  x r1
y  x r2  x a3
y  x r4  x a5  x w6
y  x r7  x a8  x w9

and so on, where y and x are whatever atoms and molecular
connectivity you happen to be using.
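
A small helper (hypothetical, not a GAMESS routine) recovers
the serial number of any coordinate in this ordering, which
is what you need when filling the IFREEZ array; the
IFREEZ(1)=7 style shown in the comment is an assumption,
so check your \$STATPT input against the manual:

```python
def coord_indices(k):
    """Serial numbers of the internal coordinates introduced by atom k in
    the automatically generated ordering sketched above: atom 2 adds a
    bond, atom 3 a bond and an angle, each atom k >= 4 a bond, an angle,
    and a torsion."""
    if k == 2:
        return {"bond": 1}
    if k == 3:
        return {"bond": 2, "angle": 3}
    return {"bond": 3 * k - 8, "angle": 3 * k - 7, "torsion": 3 * k - 6}

# Atom 5 defines r7, a8, w9, matching the pattern above; freezing its
# bond length would then mean something like IFREEZ(1)=7 in $STATPT.
print(coord_indices(5))  # {'bond': 7, 'angle': 8, 'torsion': 9}
```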

Finding minima is relatively easy.  There are large
tables of bond lengths and angles, so guessing starting
geometries is pretty straightforward.  Very nasty cases
may require computation of an exact hessian, but the
location of most minima is straightforward.

In contrast, finding saddle points is a black art.
The diagonal guess hessian will never work, so you must
provide a computed one.  The hessian should be computed at
your best guess as to what the transition state (T.S.)
should be.  It is safer to do this in two steps as outlined
above, rather than HESS=CALC.  This lets you verify you
have guessed a structure with one and only one negative
curvature.  Guessing a good trial structure is the hardest
part of a SADPOINT run.

This point is worth reiterating.  Even with sophisticated
step size control such as is offered by the QA/TRIM or RFO
methods, it is in general very difficult to move correctly
from a region with incorrect curvatures towards a saddle
point.  Even procedures such as CONOPT or RUNTYP=GRADEXTR
cannot be counted on to overcome a poor starting structure.

The RUNTYP=HESSIAN's normal coordinate analysis is
rigorously valid only at stationary points on the surface.
This means the frequencies from the hessian at your trial
geometry are untrustworthy, in particular the six "zero"
frequencies corresponding to translational and rotational
(T&R) degrees of freedom will usually be 300-500 cm**-1,
and possibly imaginary.  The Sayvetz conditions on the
printout will help you distinguish these T&R contaminants
from the real vibrational modes.  (If you have defined a
\$ZMAT, the PURIFY option within \$STATPT will help zap out
these T&R contaminants).

If the hessian at your assumed geometry does not have
one and only one imaginary frequency (taking into account
that the "zero" frequencies can sometimes be 300i!), then
it will probably be difficult to find the saddle point.
Instead you need to compute a hessian at a better guess
for the structure, or else try the mode following described
below.

If you need to restart your run, do so with the
coordinates which have the smallest RMS gradient.  Note
that the energy does not necessarily have to decrease in a
SADPOINT run, in contrast to an OPTIMIZE run.  It is often
necessary to do several restarts, involving recomputation
of the hessian, before actually locating the saddle point.

Assuming you do find the T.S., it is always a good
idea to recompute the hessian at this structure.  As
described in the discussion of symmetry, only totally
symmetric vibrational modes are probed in a geometry
search.  Thus it is fairly common to find that at your
"T.S." there is a second imaginary frequency, which
corresponds to a non-totally symmetric vibration.  This
means you haven't found the correct T.S., and are back to
the drawing board.  The proper procedure is to lower the
point group symmetry by distorting along the symmetry
breaking "extra" imaginary mode, by a reasonable amount.
Don't be overly timid in the amount of distortion, or the
next run will come back to the invalid structure.

The real trick here is to find a good guess for the
transition structure.  The closer you are, the better.  It
is often difficult to guess these structures.  One way
around this is to compute a linear least motion (LLM)
path.  This connects the reactant structure to the product
structure by linearly varying each coordinate.  If you
generate about ten structures intermediate to reactants
and products, and compute the energy at each point, you
will in general find that the energy first goes up, and
then down.  The maximum energy structure is a "good" guess
for the true T.S. structure.  Actually, the success of
this method depends on how curved the reaction path is.
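
The LLM recipe can be sketched in a few lines (toy Python
with a made-up 1-D energy function standing in for the real
PES):

```python
def llm_guess(reactant, product, energy, npts=10):
    """Linear least motion scan: linearly interpolate each coordinate
    between reactant and product, evaluate the energy at each point, and
    return the highest-energy structure as a trial transition-state guess."""
    best = None
    for i in range(1, npts + 1):
        t = i / (npts + 1.0)
        x = [r + t * (p - r) for r, p in zip(reactant, product)]
        e = energy(x)
        if best is None or e > best[0]:
            best = (e, x)
    return best

# Toy 1-D double well with a barrier at x = 0 (not a real PES!).
energy = lambda x: (x[0] ** 2 - 1.0) ** 2
e_max, x_guess = llm_guess([-1.0], [1.0], energy)
print(x_guess)  # close to [0.0], the barrier top between the two minima
```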

A particularly good paper on the symmetry which a
saddle point (and reaction path) can possess is by
P.Pechukas, J.Chem.Phys. 64, 1516-1521(1976)

mode following

In certain circumstances, METHOD=RFO and QA can walk
from a region of all positive curvatures (i.e. near a
minimum) to a transition state.  The criteria for whether
this will work is that the mode being followed should be
only weakly coupled to other close-lying Hessian modes.
Especially, the coupling to lower modes should be almost
zero.  In practice this means that the mode being followed
should be the lowest of a given symmetry, or spatially far
away from lower modes (for example, rotation of methyl
groups at different ends of the molecule). It is certainly
possible to follow also modes which do not obey these
criteria, but the resulting walk (and possibly TS location)
will be extremely sensitive to small details such as the
stepsize.

This sensitivity also explains why TS searches often
fail, even when starting in a region where the Hessian has
the required one negative eigenvalue.  If the TS mode is
strongly coupled to other modes, the direction of the mode
is incorrect, and the maximization of the energy along
that direction is not really what you want (but what you
get).

Mode following is really not a substitute for the
ability to intuit regions of the PES with a single local
negative curvature.  When you start near a minimum, it
matters a great deal which side of the minimum you start
from, as the direction of the search depends on the sign
of the mode components at the starting geometry.  You
should consult the literature before trying to use IFOLOW,
namely the paper by Frank Jensen mentioned above, and
C.J.Tsai, K.D.Jordan, J.Phys.Chem. 97, 11227-11237 (1993)
which is quite illuminating on the sensitivity of mode
following to the initial geometry point.

Note that GAMESS retains all degrees of freedom in its
hessian, and thus there is no reason to suppose the lowest
mode is totally symmetric. Remember to lower the symmetry
in the input deck if you want to follow non-symmetric
modes.  You can get a printout of the modes in internal
coordinate space by a EXETYP=CHECK run, which will help
you decide on the value of IFOLOW.

* * *

CONOPT is a different sort of saddle point search
procedure.  Here a certain "CONstrained OPTimization" may
be considered as another mode following method.  The idea
is to start from a minimum, and then perform a series of
optimizations on hyperspheres of increasingly larger
radii.  The initial step is taken along one of the Hessian
modes, chosen by IFOLOW, and the geometry is optimized
subject to the constraint that the distance to the minimum
is constant.  The convergence criteria for the gradient
norm perpendicular to the constraint is taken as 10*OPTTOL,
and the corresponding steplength as 100*OPTTOL.

After such a hypersphere optimization has converged, a
step is taken along the line connecting the two previous
optimized points to get an estimate of the next hyper-
sphere geometry.  The stepsize is DXMAX, and the radius of
hyperspheres is thus increased by an amount close (but not
equal) to DXMAX.  Once the pure NR step size falls below
DXMAX/2 or 0.10 (whichever is the largest) the algorithm
switches to a straight NR iterate to (hopefully) converge
on the stationary point.

The current implementation always conducts the search
in cartesian coordinates, but internal coordinates may be
printed by the usual specification of NZVAR and ZMAT.  At
present there is no restart option programmed.

CONOPT is based on the following papers, but the actual
implementation is the modified equations presented in
Frank Jensen's paper mentioned above.
Y. Abashkin, N. Russo,
J.Chem.Phys. 100, 4477-4483(1994).
Y. Abashkin, N. Russo, M. Toscano,
Int.J.Quant.Chem.  52, 695-704(1994).

There is little experience on how this method works in
practice, so experiment with it at your own risk!

```

created on 7/7/2017