Decimal Arithmetic FAQ
Part 2 – Definitions
Copyright © IBM Corporation, 2000, 2007. All rights reserved.


What does Algorism mean?

Algorism is the name for the Indo-Arabic decimal system of writing and working with numbers, in which symbols (the ten digits 0 through 9) are used to describe values using a place value system, where each symbol has ten times the weight of the one to its right.
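The place-value rule can be stated as a one-line recurrence: reading the digits left to right, multiply the running value by ten and add each digit. A minimal Python sketch (the function name is ours, for illustration):

```python
def place_value(digits: str) -> int:
    """Value of a string of decimal digits under the algorism
    (place-value) rule: each symbol has ten times the weight
    of the one to its right."""
    value = 0
    for d in digits:
        value = value * 10 + int(d)  # shift left one place, add the new digit
    return value

print(place_value("1998"))  # → 1998
```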

This system was originally invented in India in the 6th century AD (or earlier), and was soon adopted in Persia and in the Arab world. Persian and Arabian mathematicians made many contributions (including the concept of the decimal fractions as an extension of the notation), and the written European form of the digits is derived from the ghubar (sand-table or dust-table) numerals used in north-west Africa and Spain.

The word algorism comes from the Arabic al-Khwārizmī (“the one from Khwārazm”), the cognomen (nickname) of an early-9th-century mathematician, possibly from what is now Khiva in western Uzbekistan. His name is also the source of the word algorithm.

See also Wikipedia: Algorism.

What does precision mean?

In English, the word precision generally means the state of being precise, or describes the repeatability of a measurement.

In computing and arithmetic, it has some more specific (and different) meanings (the first is the most common):

  1. The number of significant digits in a number (leading zeros are not considered significant). For example, the following numbers all have three significant digits:
      123   1.23   1.50   0.00123   1.23E+22
    and are said to have a precision of 3. The number zero is a special case; because it requires one digit to indicate its presence, it is usually considered to have a precision of 1.

    A calculation which rounds to three digits is said to have a working precision or rounding precision of 3.

  2. The units of the least significant digit of a measurement. For example, if a measurement is 17.130 meters then its precision is millimeters (one unit in the last place, or ulp, is 1mm).

  3. (In some programming languages and databases.) The number of decimal places after the decimal point in a fixed-point number. To avoid confusion, this usage is best avoided.
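The first two senses can be made concrete with Python’s standard-library decimal module (a sketch, using the example values from above):

```python
from decimal import Decimal, localcontext

# Sense 1: significant digits. Decimal keeps trailing zeros, so the
# length of the stored coefficient is the precision of the number.
for s in ('123', '1.23', '1.50', '0.00123', '1.23E+22'):
    assert len(Decimal(s).as_tuple().digits) == 3
assert len(Decimal('0').as_tuple().digits) == 1   # zero: precision 1

# A working (rounding) precision of 3:
with localcontext() as ctx:
    ctx.prec = 3
    print(Decimal(2) / Decimal(3))   # 0.667

# Sense 2: units of the least significant digit. For 17.130 meters the
# exponent is -3, so one unit in the last place is 10**-3 m, i.e. 1 mm.
assert Decimal('17.130').as_tuple().exponent == -3
```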

See also Wikipedia: Precision.

What is the difference between a minus and an en dash?

A dash or a hyphen is positioned lower above the baseline than a minus sign, because the minus is designed to be used alongside digits (which are typically the same height as capital letters), whereas dashes usually appear between lowercase letters.

“A picture is worth a thousand words” (and in this case is the only way to guarantee the differences are shown correctly):

[Image: dashes, minuses, and hyphens at different heights above the baseline]
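The characters involved are distinct Unicode code points; Python’s standard-library unicodedata module can be used to tell them apart:

```python
import unicodedata

# Hyphen-minus (ASCII), dedicated hyphen, en dash, and minus sign.
for ch in ['-', '\u2010', '\u2013', '\u2212']:
    print(f'U+{ord(ch):04X}  {unicodedata.name(ch)}')
# U+002D  HYPHEN-MINUS
# U+2010  HYPHEN
# U+2013  EN DASH
# U+2212  MINUS SIGN
```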

What are Normal numbers, Subnormal numbers, Emax, etc.?

These terms are derived or extrapolated from the IEEE 754 and 854 standards, and describe the various kinds of numbers that can be represented in a given computer encoding. The answers in Part 5 of the FAQ explain some of these terms in more detail:

Overflow threshold
If a calculation results in a number whose magnitude is greater than or equal to the overflow threshold it is considered to have overflowed. Under IEEE 854 rules the result will then be either infinity or the largest representable number (depending on the rounding mode).
Underflow threshold
If a calculation results in a non-zero number whose magnitude would be less than the underflow threshold it is considered to have underflowed. The result will then sometimes be a subnormal number (see below), but it may be rounded down to zero or up to the threshold value.
Normal numbers
Any representable number which is greater than or equal to the underflow threshold and less than the overflow threshold is considered to be a normal number. The normal numbers form a balanced, or close to balanced, range (the underflow threshold × the overflow threshold equals either 10 or 100 for decimal encodings).
Largest normal number
The magnitude of the largest normal number.
Smallest normal number
The magnitude of the smallest normal number; this is also the underflow threshold.
Subnormal numbers
Non-zero numbers whose magnitude is less than the underflow threshold. These allow for gradual underflow, and are required by IEEE 854.
Supernormal numbers
Numbers whose magnitude is greater than or equal to the overflow threshold. Encodings do not necessarily support supernormal numbers, and they are not required by IEEE 854.
Maximum representable number
The magnitude of the largest finite number that an encoding can distinguish.
Minimum representable number
The magnitude of the smallest (tiniest) non-zero number that an encoding can distinguish. This is the smallest subnormal number.
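These behaviours can be observed with Python’s standard-library decimal module, by building a context with decimal32-like parameters (prec=7, Emin=-95, Emax=96). Overflow trapping is disabled here so the rounded result is returned instead of an exception being raised:

```python
from decimal import Decimal, Context

ctx = Context(prec=7, Emin=-95, Emax=96, traps=[])

nmax  = Decimal('9.999999E+96')   # largest normal number
nmin  = Decimal('1E-95')          # smallest normal = underflow threshold
ntiny = Decimal('1E-101')         # smallest subnormal number

assert nmax.is_normal(ctx) and nmin.is_normal(ctx)
assert ntiny.is_subnormal(ctx)

# Overflow: past the threshold the result rounds to infinity
# (under the default round-half-even rounding mode).
print(ctx.multiply(nmax, 10))     # Infinity

# Gradual underflow: below Nmin the result is a subnormal number.
print(ctx.divide(nmin, 100))      # 1E-97, a subnormal result
```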

Here’s a table illustrating the terms above, showing positive numbers only. The symbols on the left are sometimes used to refer to certain values. The example values on the right are for the 32-bit decimal encoding with 7 digits of precision (decimal32).

Symbol   Name                              Range                  Example
         Overflow threshold                Supernormal numbers    10E+96
Nmax     Largest normal number             Normal numbers         9.999999E+96
         (Maximum representable number)
Unity    One                                                      1 (1E+0)
Nmin     Smallest normal number                                   1E-95
         (Underflow threshold)
         Largest subnormal number          Subnormal numbers      0.999999E-95
Ntiny    Smallest subnormal number                                0.000001E-95
         (Minimum representable number)

Note that the example value of Ntiny could be written 0.000001E-95. The exponent of Nmax when written in scientific notation (+96 in the example) characterizes an encoding, and is called Emax. The exponent of the smallest normal number, -Emax+1, is called Emin, and the smallest possible exponent seen when a number is written in scientific notation (-101 in the example) is called Etiny. Etiny is Emin-(p-1), where p is the precision of the encoding (7 in these examples).
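Python’s decimal contexts expose these quantities directly, so the relationships in the paragraph above can be checked for decimal32-like parameters (a sketch using only the standard library):

```python
from decimal import Context

p = 7                                       # precision of decimal32
ctx = Context(prec=p, Emin=-95, Emax=96)

assert ctx.Emin == -ctx.Emax + 1            # Emin = -Emax + 1
assert ctx.Etiny() == ctx.Emin - (p - 1)    # Etiny = Emin - (p-1)
print(ctx.Etiny())                          # -101
```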

For more on the ordering of decimal numbers, see Which is larger? 7.5 or 7.500?

Please send any comments or corrections to Mike Cowlishaw.