Copyright (c) IBM Corporation, 2000. All rights reserved.
11 Jan 2000
Many people are unaware that the algorithms taught for 'manual' decimal arithmetic are quite different in different countries, but fortunately (and not surprisingly) the end results differ only in details of presentation.
The arithmetic described here was based on an extensive study of decimal arithmetic and was then evolved over several years (1978-1982) in response to feedback from thousands of users in more than forty countries. Later minor refinements were made during the process of ANSI standardization.
In the past sixteen years the arithmetic has been used successfully for hundreds of thousands of applications covering the entire spectrum of computing; among other fields, that spectrum includes operating system scripting, text processing, commercial data processing, engineering, scientific analysis, and pure mathematics research. From this experience we are confident that the various defaults and other design choices are sound.
The information explicit in the written representation of an operand is more than that conventionally encoded for floating point arithmetic. In particular, the written form preserves positional information, such as the number of digits that follow the decimal point.
For example, people expect trailing zeros to be indicated properly in a result: the sum 1.57 + 2.03 should result in 3.60, not 3.6; however, if the positional information has been lost during the operation it is no longer possible to show the expected result.
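This expectation can be demonstrated with Java's BigDecimal class (discussed in the next paragraph), whose addition carries the scale of its operands through to the result; a minimal sketch:

```java
import java.math.BigDecimal;

public class TrailingZeros {
    public static void main(String[] args) {
        // Decimal addition: the result's scale is the larger of the two
        // operand scales, so the trailing zero is preserved.
        BigDecimal sum = new BigDecimal("1.57").add(new BigDecimal("2.03"));
        System.out.println(sum);          // prints 3.60, not 3.6

        // Binary floating point carries no such positional information.
        System.out.println(1.57 + 2.03);  // the trailing zero is lost
    }
}
```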
Similarly, decimal arithmetic in a scientific or engineering context is based on a floating point model, not a fixed point or fixed scale model (indeed, this is the original basis for the concepts behind binary floating point). Fixed point decimal arithmetic packages such as ADAR or the BigDecimal class in Java 1.1 are therefore only useful for a subset of the problems for which arithmetic is used.
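As a sketch of that limitation, Java 1.1's BigDecimal forces the caller to choose a result scale and rounding mode for every inexact division; there is no floating-point-style working precision that follows the calculation:

```java
import java.math.BigDecimal;

public class FixedScaleDivide {
    public static void main(String[] args) {
        BigDecimal one = new BigDecimal(1);
        BigDecimal three = new BigDecimal(3);

        // Fixed-scale style: the caller must supply a scale and a
        // rounding mode for every inexact division, up front.
        System.out.println(one.divide(three, 4, BigDecimal.ROUND_HALF_UP));
        // prints 0.3333

        // With no scale given, a non-terminating quotient is simply an error.
        try {
            one.divide(three);
        } catch (ArithmeticException e) {
            System.out.println("error: " + e.getMessage());
        }
    }
}
```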
The information contained in the context of a calculation is also important. It usually applies to an entire sequence of operations, rather than to a single operation, and is not associated with individual operands. In practice, sensible defaults can be provided, though provision for user control is necessary for many applications.
The most important contextual information is the desired precision for the calculation. This can range from rather small values (such as six digits) through very large values (hundreds or thousands of digits) for certain problems in mathematics and physics. Most decimal arithmetics implemented to date (for example, the decimal arithmetic in the Atari OS, or in the IEEE 854-1987 standard referred to above) offer just one or two alternatives for precision -- in some cases, for apparently arbitrary reasons. Again, this does not match the user model of decimal arithmetic; an arithmetic designed for people to use must provide a wide range of available precisions.
The provision of context for arithmetic operations is therefore a necessary precondition if the desired results are to be achieved, just as a 'locale' is needed for operations involving text.
This proposal provides explicit control over four aspects of the context: the required precision (the point at which rounding is applied), the rounding algorithm to be used when digits have to be discarded, the preferred form of exponential notation for results, and whether lost-digits checking is to be applied. Other items could be added as future extensions.
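For illustration only: the standard java.math.MathContext class (introduced in Java 5, later than this document) captures the first two of these aspects, precision and rounding algorithm; the exponential-notation preference and lost-digits checking described here are not part of that class. A sketch of precision and rounding under a context:

```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class ContextSketch {
    public static void main(String[] args) {
        // A context: six significant digits, round-half-even.
        MathContext mc = new MathContext(6, RoundingMode.HALF_EVEN);

        // The same operation under two different precisions; the context,
        // not the operands, decides where rounding is applied.
        System.out.println(BigDecimal.ONE.divide(new BigDecimal(3), mc));
        // prints 0.333333

        MathContext wide = new MathContext(30, RoundingMode.HALF_EVEN);
        System.out.println(BigDecimal.ONE.divide(new BigDecimal(3), wide));
        // prints 0.333333333333333333333333333333
    }
}
```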
The defaults for the context have been tuned to satisfy the expectations of the majority of users, and have withstood the test of time well. In the vast majority of cases, therefore, the default MathContext object is all that is required.
ANSI/IEEE 854-1987 -- IEEE Standard for Radix-Independent Floating-Point Arithmetic, The Institute of Electrical and Electronics Engineers, Inc., New York, 1987.

'Ada Decimal Arithmetic and Representations': see An Ada Decimal Arithmetic Capability, Brosgol et al., 1993.

See, for example, The [Atari] Floating Point Arithmetic Package, C.