Bug 158219 - Calc should default to using another representation over FP64 (double precision floating-point)
Status: UNCONFIRMED
Alias: None
Product: LibreOffice
Classification: Unclassified
Component: Calc
Version: Inherited From OOo (earliest affected)
Hardware: All
OS: All
Importance: medium normal
Assignee: Not Assigned
 
Reported: 2023-11-14 22:01 UTC by Eyal Rozenberg
Modified: 2023-12-15 17:06 UTC
CC: 3 users

Description Eyal Rozenberg 2023-11-14 22:01:06 UTC
When a user enters fractional values in a spreadsheet, they expect the representation of those values to be exact, not an approximation in a base-2 representation such as FP32 (IEEE 754 single precision), FP64 (double precision), etc. They mean the exact value they entered.

Similarly, when a user sets a cell to the result of an additive formula involving decimal values (given immediately or via cell references), they expect the result to be exact, not approximate.

For example, users expect the formula =(3100099 - 3200012)*0.01 to yield -999.13, which it does. But users also expect =3100099*0.01 - 3200012*0.01 to yield the same value... which it does not. This second formula yields -999.1299999999970000 instead of -999.13.
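
To see the discrepancy concretely, here is a minimal sketch in Python (whose float type is the same IEEE 754 FP64 that Calc uses); the standard decimal module is used purely to illustrate exact decimal arithmetic, not as a proposal for Calc's internals:

  from decimal import Decimal

  # FP64 (binary) arithmetic: the two formulas disagree, because
  # 0.01 has no exact base-2 representation.
  a = (3100099 - 3200012) * 0.01
  b = 3100099 * 0.01 - 3200012 * 0.01
  print(a == b)      # False: b is close to, but not exactly, -999.13

  # Exact decimal arithmetic: both formulas agree.
  c = (Decimal(3100099) - Decimal(3200012)) * Decimal("0.01")
  d = Decimal(3100099) * Decimal("0.01") - Decimal(3200012) * Decimal("0.01")
  print(c == d, c)   # True -999.13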

Now, it is true that a user who is somewhat knowledgeable in matters of representation would understand why this happens. But most users are not; and moreover, this is not the user's _intent_.

Now, obviously, a binary computer cannot perform infinite-accuracy computations, so there must be limits to our "indulgence". However, we are far from where this limit should be. A user is likely to accept that accuracy will be imperfect when they use trigonometric, exponential, logarithmic and other such functions (and, in fact, most users don't use any of that intentionally).

At the very least, I argue that computations all of whose steps maintain representability in decimal form should be performed with perfect accuracy. This would require two "bigint"s: a (signed) mantissa and a base-10 exponent.
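
For illustration only, a toy model of that representation: an arbitrary-precision integer mantissa m and integer exponent e denoting the value m * 10^e. Python ints play the role of the "bigint"s here, and the class name and methods are hypothetical, not a proposed Calc interface:

  # Toy exact decimal number: value = m * 10**e.
  class Dec:
      def __init__(self, m, e):
          self.m, self.e = m, e   # signed mantissa, base-10 exponent

      def __add__(self, other):
          # Align exponents, then add mantissas; no rounding ever happens.
          e = min(self.e, other.e)
          return Dec(self.m * 10**(self.e - e) + other.m * 10**(other.e - e), e)

      def __mul__(self, other):
          return Dec(self.m * other.m, self.e + other.e)

      def __repr__(self):
          return "%de%d" % (self.m, self.e)

  # 3100099*0.01 - 3200012*0.01, with subtraction as adding a negated term:
  r = Dec(3100099, 0) * Dec(1, -2) + Dec(-3200012, 0) * Dec(1, -2)
  print(r)   # -99913e-2, i.e. exactly -999.13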

Other possibilities could be:

* Dividend and divisor, i.e. an exact rational representation (see the sketch after this list)
* Mantissa, exponent and base (so, three "bigint"s)
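
As a sketch of the rational option, Python's fractions.Fraction stores exactly such a pair of arbitrary-precision integers and keeps every +, -, *, / on rationals exact (again purely illustrative, not an implementation proposal):

  from fractions import Fraction

  # 0.01 is exactly the rational 1/100, so both formulas agree:
  hundredth = Fraction(1, 100)
  r = 3100099 * hundredth - 3200012 * hundredth
  print(r)                                      # -99913/100, i.e. exactly -999.13
  print(r == (3100099 - 3200012) * hundredth)   # True

Note that a rational representation even keeps division exact, which a decimal mantissa/exponent model cannot (e.g. for 1/3).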


... now, I know what you're going to say: It's a huge change, it's really fundamental, it goes against half of everything we've written for 35 years, nobody will ever have the time to spare to seriously work on this - maybe.

But it's the right thing to do in principle. And as for mitigating the pain:

* The default could change with a new ODF version, so that old documents maintain existing behavior.
* There would be a choice of underlying representation model, e.g. for compatibility and for lovers of binary.
* It could start as an opt-in before it becomes the default.

See also bug 128312.
Comment 1 Mike Kaganski 2023-11-15 04:44:35 UTC
(In reply to Eyal Rozenberg from comment #0)
> ... now, I know what you're going to say: It's a huge change, it's really
> fundamental, it goes against half of everything we've written for 35 years,
> nobody will ever have the time to spare to seriously work on this - maybe.
> 
> ...
> 
> See also bug 128312.

Huh? You yourself provided a link to the bug report where this was answered. And even putting aside the fact that your suggestions do not change things (there will still be only limited accuracy), the thing discussed there was not "it's huge work" but only performance.
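
(To illustrate the "still limited accuracy" point, again with Python's decimal module purely as an example: an exact-decimal model must still round as soon as a result has no finite decimal expansion.)

  from decimal import Decimal

  # 1/3 has no finite decimal representation, so even an
  # exact-decimal model has to round it at some chosen precision:
  print(Decimal(1) / Decimal(3))   # 0.3333333333333333333333333333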
Comment 2 Stéphane Guillou (stragu) 2023-11-15 07:46:20 UTC
See also recent bug 155795.