Hmm. One of the exercises in this book encourages you to use a float for prices, because of the decimal point. But I wonder if it wouldn't make more sense to use an int (or even a short) and express the price in cents. I would imagine, if you were doing a number of operations on the variable, it would be more efficient to use a smaller integer type and then divide by 100 at the end to get an output in dollars. Where's the break-even point between the number of operations on a variable and the size of the variable? Does it even work like that? See, the book doesn't tell me, so I'm guessing.
Though I suppose you'd still have to define a float for the final output in dollars, so it might not improve much. But how do I know what's sound design?