Decimal Arithmetic FAQ
Part 3 – Hardware Questions
Copyright © IBM Corporation, 2000, 2007. All rights reserved.

Contents

  Why doesn’t hardware support decimal arithmetic directly?
  Why did computers use binary in the first place?
  Surely software emulation of decimal arithmetic is fast enough?
  Do ‘hand-held’ calculators use decimal arithmetic?
  Has any company formally announced hardware decimal floating-point support?

Why doesn’t hardware support decimal arithmetic directly?

Most computer architectures other than ‘pure RISC’ machines do, in fact, provide some form of decimal arithmetic instructions or support. These include the Intel x86 architecture (used in PCs), the Motorola 68K architecture and its derivatives (used in Apple’s earlier machines and the Palm Pilot), the IBM System z (the descendants of the IBM System/360 family), and the HP PA-RISC architecture, among other general-purpose processors.

However, for all of these machines, only integer decimal arithmetic is supported, and for most the support is limited to decimal adjustment or conversion instructions which simplify decimal operations. These instructions are accessible only through assembly-language programming, and yield only small performance improvements. In all cases, any scaling, rounding, or exponent handling has to be done explicitly by the application or middleware programmer; this is a complex and very error-prone task.
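
As a minimal illustration (a hypothetical sketch, not code from any particular product), the Java fragment below shows the kind of explicit scale tracking and rounding that falls on the programmer when only integer arithmetic is available; the class and method names are invented for the example.

  // Currency amounts held as scaled integers (cents, scale 2).
  // The programmer, not the hardware, must track the scale and apply
  // the rounding rule after every multiplication.
  public class ScaledMultiply {
      // multiply a price (scale 2) by a rate (scale 2); positive values only
      static long multiplyScaled(long priceCents, long rateScaled) {
          long product = priceCents * rateScaled;   // result now has scale 4
          return (product + 50) / 100;              // round half-up back to scale 2
      }

      public static void main(String[] args) {
          // 19.99 * 1.05 = 20.9895, which should round to 20.99
          System.out.println(multiplyScaled(1999, 105));   // prints 2099
      }
  }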

The native (hardware) decimal floating-point arithmetic now available in the IBM Power6 processor and expected in the z6 microprocessor makes programming far simpler and more robust, and offers much better performance than software (for details of the new hardware implementations, see the General Decimal Arithmetic page).

Why did computers use binary in the first place?

Many early computers (such as the ENIAC, or the IBM 650) were in fact decimal machines. In the 1950s, however, most computers turned to binary representations of numbers as this made useful reductions in the complexity of arithmetic units (for example, a binary adder requires about 15% less circuitry than a decimal adder). This reduction in turn led to greater reliability and lower costs.

Storing decimal integers in a simple binary coded decimal (BCD) form also uses up to 20% more storage than a pure binary representation, depending on the coding used.
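
For example, a nine-digit unsigned integer occupies 9 × 4 = 36 bits in BCD but only 30 bits in pure binary (since 2^30 > 10^9), an overhead of 20%.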

Later, it was also shown that binary floating-point allows simpler error analysis, and for a given number of bits gives more accurate results for certain mathematical operations.

Decimal arithmetic, therefore, is inherently less efficient than binary arithmetic, and at the time this justified the switch to binary floating-point arithmetic (just as a two-digit representation for the year in a date was justifiable at the time). However, the programming and conversion overheads and other costs of using binary arithmetic suggest that hardware decimal arithmetic is now the more economical option for most applications.

Surely software emulation of decimal arithmetic is fast enough?

No, it is not. The performance of existing software libraries for decimal arithmetic is very poor compared to hardware speeds. In some applications, the cost of decimal calculations can exceed even the costs of input and output and can form as much as 90% of the workload. See “The ‘telco’ benchmark” for an example and measurements on several implementations.

Binary floating-point emulation in software was unacceptable for many applications until hardware implementations became available; the same is true for decimal floating-point (or even fixed-point) emulation today. Even using the decimal integer instructions on an IBM System z machine improves fixed-point performance by only about a factor of 10; rounding and scaling in software add significant overhead.

Complaints about the performance of decimal arithmetic are extremely common. Software emulation is 100 to 1000 times slower than a hardware implementation could be. For example, a JIT-compiled 9-digit BigDecimal division in Java™ 1.4 takes over 13,000 clock cycles on an Intel Pentium. Even a simple 9-digit decimal addition takes at least 1,100 clock cycles. In contrast, a native hardware decimal type could reduce this to a speed comparable to binary arithmetic (which takes 41 cycles for an integer division on the Pentium, or 3 for an addition).
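
For readers who want to reproduce this kind of measurement, the following sketch times repeated 9-digit BigDecimal divisions using the standard java.math API (the newer MathContext interface rather than the Java 1.4 one); the input values and iteration counts are arbitrary, and absolute timings will depend on the JVM and hardware.

  import java.math.BigDecimal;
  import java.math.MathContext;
  import java.math.RoundingMode;

  // Rough timing of a 9-digit decimal division done in software (BigDecimal).
  public class DecimalDivideTiming {
      public static void main(String[] args) {
          BigDecimal dividend = new BigDecimal("123456.789");
          BigDecimal divisor  = new BigDecimal("998.877665");
          MathContext mc = new MathContext(9, RoundingMode.HALF_EVEN);

          // warm up the JIT before timing
          for (int i = 0; i < 100000; i++) dividend.divide(divisor, mc);

          int iterations = 1000000;
          BigDecimal result = BigDecimal.ZERO;
          long start = System.nanoTime();
          for (int i = 0; i < iterations; i++) result = dividend.divide(divisor, mc);
          long elapsed = System.nanoTime() - start;

          // print the result too, so the work cannot be optimized away
          System.out.println(result + ": about " + (elapsed / iterations) + " ns per divide");
      }
  }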

The benefits are even larger for multiplications and divisions at the higher precisions often used in decimal arithmetic; a 31-digit division can take 20–110 times as long as a 9-digit addition.

Do ‘hand-held’ calculators use decimal arithmetic?

Yes. The first microprocessor-based electronic calculator, the Busicom (actually a desktop machine), used its Intel 4004 to implement decimal arithmetic in 1970.

Later, the Hewlett-Packard HP-71B calculator used a 12-digit internal decimal floating-point format (expanded to 15 digits for intermediate calculations) to implement the IEEE 854 standard.

Today, the Texas Instruments TI-89 and similar calculators use a 14-digit or 16-digit BCD internal floating-point format with a three-digit exponent. HP calculators continue to use a 12-digit decimal format; Casio calculators have a 15-digit decimal internal format.

These all use software to implement the arithmetic, as single-calculation performance is not usually an issue. Oddly, most calculators discard trailing fractional zeros.

Has any company formally announced hardware decimal floating-point support?

Yes. On 18 April 2007, IBM announced hardware decimal floating-point facilities for the IBM System z9 EC and z9 BC processors.

Since then, IBM has also announced support for decimal floating-point in the Power6 processors, and has released details of the decimal floating-point unit in the z6 microprocessor. For details of these hardware implementations, see the General Decimal Arithmetic page.


Please send any comments or corrections to Mike Cowlishaw, mfc@speleotrove.com
Copyright © IBM Corporation 2000, 2007. All rights reserved.