Decimal arithmetic Copyright (c) IBM Corporation, 2000. All rights reserved. © 11 Jan 2000 [previous | contents | next]

## Design concepts

The decimal arithmetic defined here was designed with people in mind, and necessarily has a paramount guiding principle -- computers must provide an arithmetic that works in the same way as the arithmetic that people learn at school.

Many people are unaware that the algorithms taught for 'manual' decimal arithmetic are quite different in different countries, but fortunately (and not surprisingly) the end results differ only in details of presentation.

The arithmetic described here was based on an extensive study of decimal arithmetic and was then evolved over several years (1978-1982) in response to feedback from thousands of users in more than forty countries. Later minor refinements were made during the process of ANSI standardization.

In the past sixteen years the arithmetic has been used successfully for hundreds of thousands of applications covering the entire spectrum of computing; among other fields, that spectrum includes operating system scripting, text processing, commercial data processing, engineering, scientific analysis, and pure mathematics research. From this experience we are confident that the various defaults and other design choices are sound.

#### Fundamental concepts

When people carry out arithmetic operations, such as adding or multiplying two numbers together, they commonly use decimal arithmetic where the decimal point 'floats' as required, and the result that they eventually write down depends on three items:
1. the specific operation carried out
2. the explicit information in the operand or operands themselves
3. the information from the implied context in which the calculation is carried out (the precision required, etc.).

The information explicit in the written representation of an operand is more than that conventionally encoded for floating point arithmetic. Specifically, there is:

- an optional sign (only significant when negative)
- a numeric part, or numeric, which may include a decimal point (which is only significant if followed by any digits)
- an optional exponent, which denotes a power of ten by which the numeric is multiplied (significant if both the numeric and exponent are non-zero).

The length of the numeric and the original position of the decimal point are not encoded in traditional floating point representations, such as ANSI/IEEE 854-1987,[1] yet they are essential information if the expected result is to be obtained.
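To make this concrete, here is a small sketch using the java.math.BigDecimal class (shown for illustration; it is a later descendant of the design described here). The string form carries the sign, the numeric part, and the exponent, and the class records where the decimal point fell:

```java
import java.math.BigDecimal;

public class ExplicitInfo {
    public static void main(String[] args) {
        // "-1.570E+1" carries a sign, a numeric part with a decimal
        // point, and an exponent; BigDecimal preserves all three.
        BigDecimal n = new BigDecimal("-1.570E+1");
        System.out.println(n);                 // -15.70
        System.out.println(n.scale());         // 2 (digits after the point)
        System.out.println(n.unscaledValue()); // -1570
    }
}
```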

For example, people expect trailing zeros to be indicated properly in a result: the sum 1.57 + 2.03 should result in 3.60, not 3.6; however, if the positional information has been lost during the operation it is no longer possible to show the expected result.
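This expectation can be checked directly with java.math.BigDecimal (again used as an illustration), whose addition preserves the operands' decimal places:

```java
import java.math.BigDecimal;

public class TrailingZeros {
    public static void main(String[] args) {
        // The result keeps two decimal places, so the trailing zero
        // is not lost: 1.57 + 2.03 gives 3.60, not 3.6.
        BigDecimal sum = new BigDecimal("1.57").add(new BigDecimal("2.03"));
        System.out.println(sum); // 3.60
    }
}
```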

Similarly, decimal arithmetic in a scientific or engineering context is based on a floating point model, not a fixed point or fixed scale model (indeed, this is the original basis for the concepts behind binary floating point). Fixed point decimal arithmetic packages such as ADAR[2] or the BigDecimal class in Java 1.1 are therefore only useful for a subset of the problems for which arithmetic is used.
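The difference between the two models can be sketched with java.math.BigDecimal (used here for illustration): an exact, fixed-point style division of 1 by 3 cannot produce a result at all, while a floating point division under a context can:

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class FixedVsFloating {
    public static void main(String[] args) {
        BigDecimal one = new BigDecimal("1");
        BigDecimal three = new BigDecimal("3");
        try {
            // Exact division fails: 1/3 has no terminating decimal form.
            one.divide(three);
        } catch (ArithmeticException e) {
            System.out.println("exact divide failed: " + e.getMessage());
        }
        // Floating point division rounds to the context's precision.
        System.out.println(one.divide(three, new MathContext(9))); // 0.333333333
    }
}
```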

The information contained in the context of a calculation is also important. It usually applies to an entire sequence of operations, rather than to a single operation, and is not associated with individual operands. In practice, sensible defaults can be provided, though provision for user control is necessary for many applications.

The most important contextual information is the desired precision for the calculation. This can range from rather small values (such as six digits) through very large values (hundreds or thousands of digits) for certain problems in mathematics and physics. Most decimal arithmetics implemented to date (for example, the decimal arithmetic in the Atari OS,[3] or in the IEEE 854-1987 standard referred to above) offer just one or two alternatives for precision -- in some cases, for apparently arbitrary reasons. Again, this does not match the user model of decimal arithmetic; an arithmetic designed for people to use must provide a wide range of available precisions.
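The point about a wide range of precisions can be illustrated with java.math.MathContext (a later descendant of the context object described here), which accepts any requested number of digits rather than one or two fixed formats:

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class PrecisionRange {
    public static void main(String[] args) {
        BigDecimal one = new BigDecimal("1");
        BigDecimal seven = new BigDecimal("7");
        // The same calculation at two very different precisions.
        System.out.println(one.divide(seven, new MathContext(6)));
        // -> 0.142857
        System.out.println(one.divide(seven, new MathContext(30)));
        // -> 0.142857142857142857142857142857
    }
}
```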

The provision of context for arithmetic operations is therefore a necessary precondition if the desired results are to be achieved, just as a 'locale' is needed for operations involving text.

This proposal provides for explicit control over four aspects of the context: the required precision (the point at which rounding is applied); the rounding algorithm to be used when digits have to be discarded; the preferred form of exponential notation to be used for results; and whether lost digits checking is to be applied. Other items could be included as future extensions.
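Of these four aspects, the first two survive unchanged in the java.math.MathContext class that later grew out of this proposal (the exponent-form and lost-digits settings did not carry over to java.math); a brief sketch of their effect:

```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class ContextControl {
    public static void main(String[] args) {
        BigDecimal one = new BigDecimal("1");
        BigDecimal three = new BigDecimal("3");
        // Precision fixes the point at which rounding is applied...
        System.out.println(one.divide(three, new MathContext(6)));
        // -> 0.333333  (default HALF_UP rounding)
        // ...and the rounding algorithm decides which way the last
        // retained digit moves when digits are discarded.
        System.out.println(one.divide(three, new MathContext(6, RoundingMode.CEILING)));
        // -> 0.333334
    }
}
```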

#### Embodiment of the concepts

The two kinds of information described (operands and context) are conveniently and naturally represented by two classes for Java: one that represents decimal numbers and implements the operations on those numbers, and one that simply represents the context for decimal arithmetic operations. It is proposed that these classes be called BigDecimal and MathContext respectively. The BigDecimal class enhances the original class of that name by adding floating point arithmetic.

The defaults for the context have been tuned to satisfy the expectations of the majority of users, and have withstood the test of time well. In the vast majority of cases, therefore, the default MathContext object is all that is required.
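As a sketch of the intended usage pattern, a single context object can govern a whole sequence of operations; here java.math.MathContext.DECIMAL64 (16 digits) stands in for a sensible default, though the proposal's own default may differ:

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class OneContextManyOps {
    public static void main(String[] args) {
        // One context, created once, applied to every operation.
        MathContext mc = MathContext.DECIMAL64;
        BigDecimal total = BigDecimal.ZERO;
        for (String s : new String[] {"1.1", "2.2", "3.3"}) {
            total = total.add(new BigDecimal(s), mc);
        }
        System.out.println(total); // 6.6
    }
}
```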

Footnotes:
 [1] ANSI/IEEE 854-1987 -- IEEE Standard for Radix-Independent Floating-Point Arithmetic, The Institute of Electrical and Electronics Engineers, Inc., New York, 1987.
 [2] 'Ada Decimal Arithmetic and Representations'. See: An Ada Decimal Arithmetic Capability, Brosgol et al., 1993. http://www.cdrom.com/pub/ada/swcomps/adar/
 [3] See, for example: The [Atari] Floating Point Arithmetic Package, C. Lisowski. http://intrepid.mcs.kent.edu/%7Eclisowsk/8bit/atr11.html
