Created attachment 177165 [details]
Showing the issue

The .99999999 tail is the problem.
I accidentally pressed Submit before finishing the form, so the title should be about the .9999999 tail, not floats vs. integers. The first part (e.g. 18.3) is a float too. But there is an inconsistency: some results display cleanly as integers, while others show up with the .999999999... tail.
Created attachment 177166 [details]
Showing the issue better with the (C10-C9) * 1000
This is not a bug. The floating-point error depends on the specific operands, so it is normal that for some combinations the error falls below the precision of the number display, while for others it happens to be visible. https://wiki.documentfoundation.org/Faq/Calc/Accuracy
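To illustrate the operand dependence, here is a minimal sketch in Python (assuming the platform's float is an IEEE 754 64-bit double, which holds for Calc and CPython alike). Both results below carry a representation error, but only the second one is large enough to survive rounding to 15 significant digits, which is what Calc displays:

# Both computations are inexact, but only the second error
# is big enough to show at 15 significant digits.
print(f"{0.1 + 0.2:.15g}")            # 0.3
print(f"{(10.7 - 9.8) * 1000:.15g}")  # 899.999999999999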
Just to explain what's happening here in detail, step by step. The calculation in question is (10.7-9.8)*1000.

10.7 is represented in IEEE 754 64-bit as (exactly):
[10.699999999999999]289457264239899814128875732421875
It's wrong after the 17th significant decimal (shown using brackets), and is *less than* the ideal number.

9.8 is represented in IEEE 754 64-bit as (exactly):
[9.800000000000000]710542735760100185871124267578125
It's wrong after the 16th significant decimal (shown using brackets), and is *greater than* the ideal number.

Note how the opposite directions of the representation errors *add up* when performing the subtraction. The ideal difference is 0.9, i.e. all digits to the left of the decimal point are cancelled by the subtraction. This cancellation removes two digits of the first operand and one digit of the second operand, so the representations' errors now show up at the 15th decimal digit of the result.

10.7-9.8 is represented in IEEE 754 64-bit as (exactly):
0.[89999999999999]857891452847979962825775146484375
It's wrong after the 14th significant decimal (shown using brackets).

Multiplying by 1000 gives, in IEEE 754 64-bit (exactly):
[899.99999999999]863575794734060764312744140625
It's wrong after the same 14th significant decimal (shown using brackets).

On your screenshots, Calc shows 15 significant decimals (the maximum possible - if more decimals were used, they would all be shown as zeroes). This means it must round to what is shown below in brackets:
[899.999999999998]63575794734060764312744140625
The '6' after the shown part rounds the preceding '8' up, so the result is 899.999999999999, which is what you see.

At every step, all calculations are performed correctly. Unlike serial summation, there is no algorithm that could improve the accuracy of such a calculation AFAIK.
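For the curious, the exact values above can be reproduced with a few lines of Python (a minimal sketch; it assumes the platform's float is an IEEE 754 64-bit double, as in Calc):

from decimal import Decimal

# Decimal(float) expands the IEEE 754 double to its exact decimal value
print(Decimal(10.7))                  # 10.699999999999999289457264239899814128875732421875
print(Decimal(9.8))                   # 9.800000000000000710542735760100185871124267578125
print(Decimal(10.7 - 9.8))            # 0.89999999999999857891452847979962825775146484375
print(Decimal((10.7 - 9.8) * 1000))   # 899.99999999999863575794734060764312744140625

# Rounding to 15 significant digits, as Calc's display does,
# reproduces the value from the screenshot
print(f"{(10.7 - 9.8) * 1000:.15g}")  # 899.999999999999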
Ok, thank you for the explanation... I thought there was an issue with the calculations themselves. Out of curiosity, do you happen to know if Excel has the same "issue"? If not, how do they deal with it?
Created attachment 177174 [details]
Same calculation in Excel 2016

(In reply to alekshs from comment #6)
It calculates the same :)
Aha... thank you :D
Created attachment 177184 [details]
Gnumeric with the same calculation

FTR: this is what Gnumeric (which is known to be generally more correct than Excel) does in this case (I was interested to see that it shows 16 significant digits).
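Rounding the same double to 16 significant digits indeed matches what Gnumeric displays (again a Python sketch, under the same IEEE 754 double assumption):

print(f"{(10.7 - 9.8) * 1000:.16g}")  # 899.9999999999986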