A slightly subtle issue with binary representations of money... I ran into this one a few times on a stock trading application that initially used double-precision floating point to represent monetary values.
Consider the fractional value (in base ten) 11/16. In exact radix form, that's 0.6875. Rounded to one decimal digit, that's 0.7; to two digits, 0.69; and to three digits, either 0.687 or 0.688 (depending on the rounding rule you choose). That's the way we usually think of numbers; hopefully none of that is surprising.
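The decimal roundings above can be checked with Python's decimal module (a sketch; ROUND_HALF_UP and ROUND_HALF_DOWN are just the standard library's names for the rules described):

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_DOWN

x = Decimal("0.6875")  # 11/16, exact in decimal

one_digit = x.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)      # 0.7
two_digits = x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)    # 0.69
# The three-digit case sits exactly on a rounding boundary, so the
# chosen rule decides the answer:
three_up = x.quantize(Decimal("0.001"), rounding=ROUND_HALF_UP)     # 0.688
three_down = x.quantize(Decimal("0.001"), rounding=ROUND_HALF_DOWN) # 0.687
```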
But now look at what happens in a binary representation of the same number. The fractional form is 1011/10000 and the radix form is 0.1011. If you round that to three fractional bits (roughly equivalent to one decimal digit), you get 0.110 (or possibly 0.101 under different rounding rules). When you convert that value to decimal for presentation to one of those pesky humans, you get ... 0.75 (or possibly 0.625).
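To see the binary rounding concretely, here's a small sketch using Python's fractions module. Rounding to three fractional bits is the same as rounding to the nearest multiple of 1/8; note that 11/16 lands exactly on the halfway point, and Python's round() happens to break that tie upward (ties-to-even):

```python
from fractions import Fraction

x = Fraction(11, 16)                 # binary 0.1011
# Three fractional bits means the nearest multiple of 1/8.
# x * 8 = 11/2 = 5.5 eighths: an exact tie between 0.101 and 0.110.
rounded = Fraction(round(x * 8), 8)  # round() resolves the tie to even: 6/8
print(float(rounded))                # 0.75 — the decimal a human would see
```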
There's nothing actually wrong with the way binary rounding works; it's just (very) unexpected to the average person looking at a rounded monetary value. The “expected” value of 0.7 for single-digit rounding is (in binary) 0.10110011001100..., an infinitely repeating fraction. There's simply no way to round in binary and get a result like that!
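You can watch this directly in Python: constructing a Decimal from a float exposes the exact binary value the double actually stores (a sketch; the trailing digits are elided in the comment):

```python
from decimal import Decimal

# Decimal("0.7") is the exact decimal value; Decimal(0.7) is the exact
# value of the nearest double, which is strictly less than 0.7.
print(Decimal(0.7))                    # 0.69999999999999995559...
print(Decimal(0.7) == Decimal("0.7"))  # False
```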
This rounding problem crops up most often when doing division operations. In those stock trading applications, we kept getting it in two places: stock quotes in fractional values (still common in some exchanges, though thankfully not in the U.S.), and when calculating average price for a series of related trades. The latter problem caused us much grief when the system on the other end represented money using decimal numbers – our average price calculation would give a different result than theirs, we'd get a mismatch, and a human would have to intervene to figure it all out. That human was not a programmer, so from their point of view our binary rounding was simply wrong. We never really fixed this problem until we switched to decimal representation...
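The mismatch is easy to reproduce. Suppose an average price works out to 2.675 (a hypothetical figure, not one from the trading system): the nearest double is actually a hair below 2.675, so rounding it to cents goes down, while a decimal system rounding half-up goes up:

```python
from decimal import Decimal, ROUND_HALF_UP

# The double nearest to 2.675 is 2.67499999999999982..., below the midpoint.
binary_cents = round(2.675, 2)  # rounds down, to 2.67
decimal_cents = Decimal("2.675").quantize(Decimal("0.01"),
                                          rounding=ROUND_HALF_UP)
print(binary_cents, decimal_cents)  # 2.67 2.68 — the kind of one-cent
                                    # mismatch a human had to untangle
```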