Linear interval arithmetic and Hansen's linear interval arithmetic share a common motivation. Hansen's linear interval arithmetic is, however, a closer relative of the derivative-based methods.
In Hansen's scheme, each interval is represented as a sampled value $v$ and a slope $d$, each an interval: a quantity is bounded by $v + d\,t$ as the parameter $t$ varies from $-c$ to $c$.
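A short C++ sketch may make the representation concrete; the names `Interval` and `HansenForm` are illustrative, not taken from Hansen:

```cpp
// A minimal sketch of a Hansen-style interval.  The names Interval and
// HansenForm are illustrative; outward rounding is ignored throughout.
struct Interval {
    double lo, hi;   // lo <= hi
};

// A quantity is bounded by  v + d*t  as t ranges over [-c, c]; both the
// sampled value v and the slope d are carried as intervals.
struct HansenForm {
    Interval v;   // bounds on the sampled value
    Interval d;   // bounds on the slope
    double   c;   // the parameter t varies over [-c, c]
};
```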
Bounds produced using Hansen's linear interval arithmetic are generally superior to bounds produced using a derivative-based method, as the relationships between the free variables and their derivatives may be taken into account, as with linear interval arithmetic. An example evaluation is appropriate; let us bound the range of $g(x)$ for $x \in [0,1]$ by evaluating $g([0,1])$ with Hansen's linear interval arithmetic.
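For illustration, take as an example $g(x) = x\,(1-x)$, whose exact range over $[0,1]$ is $[0,\tfrac{1}{4}]$. A derivative-based evaluation in mean-value form gives
\[
g\!\left(\tfrac{1}{2}\right) + g'([0,1])\left([0,1]-\tfrac{1}{2}\right)
= \tfrac{1}{4} + [-1,1]\left[-\tfrac{1}{2},\tfrac{1}{2}\right]
= \left[-\tfrac{1}{4},\tfrac{3}{4}\right],
\]
whereas tracking slopes in Hansen's fashion, with $x = \tfrac{1}{2} + t$ and $t \in [-\tfrac{1}{2},\tfrac{1}{2}]$, gives
\[
g(x) = \left(\tfrac{1}{2}+t\right)\left(\tfrac{1}{2}-t\right) = \tfrac{1}{4} - t^{2},
\qquad t^{2} \in \left[0,\tfrac{1}{4}\right],
\]
which yields the exact range $[0,\tfrac{1}{4}]$.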
Hansen's linear interval arithmetic and our linear interval arithmetic perform a similar amount of work computing bounds for any given operator; consider an operator together with its evaluation rule.
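For illustration, plausible Hansen-style rules for addition and multiplication, continuing the sketch above, might read as follows; these are reconstructions, not necessarily Hansen's own formulation, and outward rounding is again omitted:

```cpp
#include <algorithm>

// Interval helpers; a careful implementation would round outward.
Interval add(Interval a, Interval b) {
    return { a.lo + b.lo, a.hi + b.hi };
}
Interval mul(Interval a, Interval b) {
    double p[4] = { a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi };
    return { *std::min_element(p, p + 4), *std::max_element(p, p + 4) };
}

// (xv + xd t) + (yv + yd t) = (xv + yv) + (xd + yd) t.
// Both arguments are assumed to share one parameter, so x.c == y.c.
HansenForm add(HansenForm x, HansenForm y) {
    return { add(x.v, y.v), add(x.d, y.d), x.c };
}

// (xv + xd t)(yv + yd t) = xv yv + (xv yd + yv xd) t + xd yd t^2;
// the quadratic term is folded into the value using t^2 in [0, c^2].
HansenForm mul(HansenForm x, HansenForm y) {
    Interval t2 = { 0.0, x.c * x.c };
    return { add(mul(x.v, y.v), mul(mul(x.d, y.d), t2)),
             add(mul(x.v, y.d), mul(y.v, x.d)),
             x.c };
}
```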
It is not completely clear why Hansen chose this particular evaluation rule.
With either approach, fewer floating-point operations may be used to compute bounds if looser bounds are acceptable. Derivative methods are easily implemented, and produce slightly larger bounds at a slightly higher evaluation cost. The efficiency graphs illustrate that the approaches differ in computational efficiency by constant factors.
With a modern language, such as C++, Hansen's intervals may be built using an underlying interval arithmetic class. Our methods may be modularly constructed by using an underlying linear bound class, which observes the current rounding mode.
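A sketch of both constructions, assuming some interval class `I` that supplies its own arithmetic (all class names here are illustrative):

```cpp
#include <cfenv>

// Hansen's intervals, built generically atop any interval class I.
template <class I>
struct HansenInterval {
    I v, d;  // sampled value and slope, each an interval of type I
};

template <class I>
HansenInterval<I> add(HansenInterval<I> x, HansenInterval<I> y) {
    return { x.v + y.v, x.d + y.d };  // delegates to I's operator+
}

// A lower linear bound  a + b*t  with real coefficients.  The class
// observes the current rounding mode: its sums are valid lower bounds
// whenever the caller has selected downward rounding.
struct LowerLinearBound {
    double a, b;
};

LowerLinearBound add(LowerLinearBound p, LowerLinearBound q) {
    // sound when std::fegetround() == FE_DOWNWARD
    return { p.a + q.a, p.b + q.b };
}
```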
It is unreasonable to expect a common optimizing compiler to produce code comparable to a direct implementation, as symbolic reasoning is employed when producing an efficient implementation. The author advocates the implementation of a program which employs sophisticated symbolic reasoning to automatically produce ``direct'' implementations for any of a variety of interval arithmetics. Such a program would be given a description of an operator's properties, and would produce a code fragment implementing the corresponding interval operator. Such routines may be folded into an optimizing compiler, but it is unclear which algorithms, other than interval arithmetic classes, would benefit.
The chief advantage of our approach to generalized interval arithmetic is its mathematical simplicity, which allows properties to be tracked naturally. This simplicity also provides for superior handling of discontinuous functions and multi-functions.
The chief advantage of Hansen's approach is that it is easily implemented, given an implementation of the underlying interval operator; the underlying operator is, of course, implemented with even less work. With a naive implementation, both operators return sub-optimal bounds. Regardless, as the interval width $j$ shrinks, the relative differences between the resulting bounds approach zero. With effort, better bounds may be returned, although this mitigates the chief advantage of the two methods.
Our generalized interval arithmetic may be implemented using Hansen's methods, or using derivative-based methods. Temporary recourse to the methods outlined in this thesis is possible when considering non-differentiable operators.
Finally, it should be noted that with several minor changes to Hansen's fundamental definition of intervals, we may produce our fundamental definition of intervals. This proceeds as follows: for an interval, let the parameter vary from 0 to 1, rather than from $-c$ to $c$; and let the slope $b$ be a single real number, rather than an interval.
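Concretely, with the illustrative Hansen form $v + d\,t$, $t \in [-c,c]$, used earlier, the substitution $t = -c + 2c\,u$ effects the change of parameter:
\[
v + d\,t = (v - c\,d) + 2c\,d\,u, \qquad u \in [0,1];
\]
with the slope further restricted to a single real number, the result is a real linear function of $u$ over $[0,1]$, matching the changes just described.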