Description:
Bad situation: using Writer for a medium project, ~160 pages, password protected, with headers, footers, tables, images, captions, a table of contents, list of figures, list of tables, footnotes, endnotes, cross-references ... In the end phase it suddenly became extremely slow: 100% CPU load by soffice.bin, "running ants" and "repagination" in the bottom row, unusable. Saving the file and restarting, then reloading, was no better. An "old" version from two days ago behaves better, with CPU load between 2 and 15 percent. Throwing out all (two) notes (they were new, and I remember notes being slow in Calc), scrolling back and forth multiple times through the document, then saving and reloading mitigated but did not heal the problem. Does anybody have an idea what risk LO poses of people missing their deadlines or failing their degrees with such minefield behavior? _assumption_: the file was somehow infected by inserting the notes. Alas, I cannot share the file.

Steps to Reproduce:
See description.

Actual Results:
Slow.

Expected Results:
Normal performance.

Reproducible: Always

User Profile Reset: No

Additional Info:
Environment: Kali (Debian) Linux.
Version: 24.8.5.2 (X86_64) / LibreOffice Community
Build ID: 480(Build:2)
CPU threads: 8; OS: Linux 6.8; UI render: default; VCL: x11
Locale: en-US (en_US.UTF-8); UI: en-US
Debian package version: 4:24.8.5-2
Calc: threaded

Writer was started with `GDK_SCALE=1 SAL_FORCEDPI=192 libreoffice --writer` to avoid much too tiny icons on a 4K display.
It will be hard to address this without a sample file [https://wiki.documentfoundation.org/QA/Bugzilla/Sanitizing_Files_Before_Submission]. There is a plethora of possible causes, from tables splitting wrongly to an image anchor being at the wrong spot, aside from the file format being an influence. The only advice is:
* resetting the user profile
* using an older version, or trying a more recent version
* using an alternative office suite
* trying the same file on a different system (including a virtual machine)
----------
Yes, LibreOffice is kind of a minefield. There are risks in using LibreOffice for complicated documents and delicate work. However, that's my personal opinion. And regular updates don't mean incremental improvement: old problems get fixed, new issues are introduced. Whether LibreOffice is the proper tool surely depends on the use case. Apparently it works for the majority of users.
hello Telesto, nice to see you alive and active ...

> It will be hard to address this without sample file

Yes, I know, but I don't see the point in investigating and providing better info when the only reaction is being asked two years later to check whether the issue vanished in the meantime. The post is more a warning to users and a demand to programmers ... try to produce stable code.

> There a plethora of possible causes.

Yes, and each "I think it helps" unstable patch doubles them.

> For
> example tables splitting wrongly to image anchor being at the wrong spot.
> Aside from the file format being of influence.

All of these shouldn't come up in stable programs, and we can't blame innocent, unwitting users for them.

> The only advice is;
> * resetting user profile
> * using older version or trying a more recent version.
> * using alternative Office Suite
> * trying the same file on a different system (including virtual machine)

And all of that is quite poor "poking in the fog" ...

> ----------
> Yes, LibreOffice is kind of a minefield. There are risk using LibreOffice
> for complicated documents and delicate work. However that's my personal
> opinion. And regular updates doesn't mean that incremental improvement. Old
> problems get fix, new issues are introduced. If LibreOffice being the proper
> tool surely depends on use-case. Apparently it works for majority of the
> users..

Yes, it's not useless; however, better coding discipline and some rework might be beneficial. An idea to help people who have saved older states: provide an option for side-by-side comparison of the old and the new version, with differences highlighted and easy transfer between them. That would save a lot of time in the experimenting you proposed. Just my two cents ...
(In reply to b. from comment #2)
> Yes, it's not useless, however a better coding discipline and some rework
> might be beneficial.

I'm not sure it's coding discipline. Say: table layout isn't static, it's recalculated on the fly on each edit. And there is a plethora of variables: embedded tables, with embedded tables inside them again; merged rows, merged columns; different font sizes. So bugs are bound to happen. It's highly complex. Each time a certain area is refactored to accommodate today's needs, new bugs get introduced: things that got overlooked, old code that simply worked by coincidence, performance issues with compatibility layers, bugs that were masked by certain behaviour.

There is a plethora of reasons:
A) Primarily a lack of resources: developers (there are enough bugs in the bug tracker). Some developer will take up the task of refactoring area XYZ, but then gets 'flooded' by the fall-out, unable to address it all alone (feeling overwhelmed). Others simply start collecting the bugs, so that someday, when 'familiar' with the code again and with time available, they can fix those bugs all at once.
B) A lack of testers.
C) A fixed release schedule, each time presenting something 'new', with changes made at the last minute and no time to check for side effects. Holding off has no purpose either: as stated above, there is a lack of testers, and of developers to solve them.
D) The need to compete with competitors: adding half-baked features to be able to compete, while the issues aren't ironed out for lack of resources to fix them (and/or testers to find them before release).

On the developer side:
A) Volunteering developers have only so many free hours, and it needs to be 'fun'. Aside from that, personal lives change (getting married, having kids, etc.) and they move on.
B) Paid developers are able to spend more hours. However, they must feel up to the task; they also need to enjoy working on the code.
Coding can be terribly frustrating: whack-a-mole. Ideally there would be an 'army' of, let's say, 30-40 full-time paid developers working five days a week improving the code (based on the size of the project), with a support team in the background. In reality it's more like 5-10 (part-time) developers, most hired by an ecosystem partner, getting X time from the employer to work on LibreOffice as they choose, or getting an assigned task (or something like that). So a couple of weeks of nothing, a couple of changes, and back to radio silence (busy with other tasks). The whole project is quite understaffed and/or overambitious, IMHO. The result is pretty decent seen from a resource perspective, but not the quality product you might expect. And the general quality isn't improving incrementally over time either: one bug is replaced by another in the same or a different area. Sometimes I get the (subjective) feeling it's getting worse: more large refactors were done without addressing the loose ends. OK, by some measures it might have improved: the number of crashes is likely lower compared to five years ago, in my perception at least.
hello @Telesto, you are quite nicely describing a project that has become bogged down in mud and suffers from more will than skill ... There is another way; look at code by Dan Bernstein (qmail) or Miguel de Icaza (Midnight Commander, Gnumeric) ... Or try to get access to Volker Birk's talk "Software Engineering". More manpower mostly increases confusion; better quality is the way to go ... once lost, it's hard to recover ... bin-FP calculations contribute a lot.
(In reply to b. from comment #4)
> hello @ Telesto,
>
> you are quite nicely describing a project that has become bogged down in mud
> and suffers from more will than skill ...

* Bogged down, sure.
* More will than skill: no idea. The experienced developers appear pretty skilled in my perception, but then I have no coding experience, so it looks like magic anyhow :-) The shrinking number of active experienced developers familiar with certain parts of the code is not helpful. Lots of legacy code (without much documentation) doesn't help either.

> There is another way, look at code by Dan Bernstein ( qmail ) or Miguel de
> Icaza ( midnight commander, gnumeric ) ...
>
> Or try to get access to Volker Birk's talk "Software Engineering".

Probably :-). I'm not a big reader myself, and not too familiar with the developer world. The LibreOffice suite is surely quite a big project, with its various applications; Gnumeric is way more focused.

The 'can do' mentality doesn't help either. If you ask (and pay), scope creep is realized. So if some company XYZ wants to use LibreOffice for simple modifications to a PDF document, someone implements a PDF import filter. The implementation might have limitations and quirks, but it's good enough for XYZ. Now that people can open PDFs, complaints arrive about the limitations. The next step is a discussion about becoming a full-fledged PDF editor ...

> More manpower mostly improves confusion, better quality is the way to go ...

There is an optimum, sure. However, there are enough areas which could use a dedicated developer for quite some time. There are all sorts of independent components and layers: import and export filters for various file formats; VCL rendering; accessibility; the various components (Writer/Calc/Draw/Impress/Base/Math). Calc/Draw/Impress/Base/Math are getting the bare minimum of attention. The question is rather who decides what a developer should address (or not). There is no general direction; there is no focus or prioritization.
Only more and less vocal people, and people with a little more power than others (board members). TDF budget might be spent on reviving Base (a niche product, unmaintained for years), I read. Say this entails hiring a dedicated developer for 18 months; at the end, focus will shift again. The Base developer gets fired or reassigned because of other priorities (hello Writer/Calc). So we are going to improve 'horrible shape' to, ideally, good, and then let it decay again to mediocre or bad, I fear. The old users of Base have moved on, I guess. So we need to create the user base for Base again? And when new users do arrive, it's probably 'unsupported' again; hello bugs.

> once lost it's hard to recover ... bin-FP calculations contribute a lot.

The quality issues were already present since the fork to LibreOffice, if you ask me. "bin-FP calculations contribute a lot" is too cryptic for me. I suppose you're referring to Calc and something with floating points?
It looks as if ... after deleting the notes, and having Writer and the document open in the background for several hours (!) continuously stressing my fan ... the issue vanished and I'm back to normal performance, normal CPU load, and silence from a slow fan. Don't blame me for reporting unstable or hard-to-reproduce bugs; instead, think about designing LO and Writer's programming to produce stable, or better, no bugs.
It might be worth checking if any recent updates or background extensions are affecting UI responsiveness. For those multitasking with resource-heavy tools, optimizing recruitment strategies can save a lot of time — I recommend using the Recruitment Tag Calculator (https://arknightsrecruitmentcalculator.vercel.app/) to quickly filter optimal operator combinations without overloading your system
It sounds like your system might be getting bogged down due to heavy rendering tasks or memory leaks—especially common with large datasets or dynamic UI components. You might want to try optimizing background processes or reducing component re-renders. Also, if you're working on anything payroll-related and need quick calculations, this Illinois paycheck tool (https://illinoiswagecalculator.vercel.app/) could save you time by estimating figures without the overhead of running extra local scripts.
It looks as if ... for some time I have been getting more spam from this site than anything useful ... see the above two comments. On the one hand, this is a fitting fate for a bug tracker / community that is overwhelmed with dealing with bugs and has specialized in "managing" them for years; on the other hand, it's annoying. It would make sense if "the administrators" made the effort to put the first, e.g., five comments of new contributors under moderator review, and simply kicked out spam instead of setting it to "hidden". Also, to put those commenters into some honeypot.
It happened again: fan active, 100% CPU load by soffice.bin, ... "Repagination" ... same document, this time without any "Notes". :-( Is there any method to diagnose what triggers this behavior? Are more than 65535 words critical? I crossed that limit shortly before. In this case it took 3 hours of waiting until the high CPU load stopped.
Hello b, Thank you for reporting the bug. Please attach a sample document, as this makes it easier for us to verify the bug. You can create a similar document with copy paste of some random text. I have set the bug's status to 'NEEDINFO'. Please change it back to 'UNCONFIRMED' once the requested document is provided. (Please note that the attachment will be public; remove any sensitive information before attaching it. See <https://wiki.documentfoundation.org/QA/FAQ#sanitize> for help on how to do so.)
Kidding? I shall change a medium-sized document with 180 pages of text, using 4 different fonts in 6 different sizes, 3 color formattings, 6 background formattings, headers, footers, logos, 3 indexes, 19 pictures, 13 tables, 83 footnotes, 48 endnotes, 32 set references, unknown numbers of references to them, ~200 hyperlinks and ... to randomness, then hope that an unstable issue stably reappears in an unstable program, attach it, and then wait about a year to receive a comment asking me to check whether the bug vanished by itself ... ??? No way.

An alternative proposal: construct a test document "chaos.odt". Define amounts for pages, images, tables, formats, ... which shall be assured safely usable in LO Writer, make the document with ten times those amounts, and look whether and where issues pop up. Then dig down into what and why. E.g., if 1000 pages, 1000 images, 1000 tables, 100 formats, and 1000 notes are allowed, make the test document with 10 000 pages, 10 000 images ... Once it works, communicate the smaller limits to the users. Benefit: that is one-time work by experts and will lead to results, much better than work done 1000 times by users which is then ignored. best ... :-)
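For what it's worth, the "chaos" document idea above can be approximated with a small script. Below is a minimal sketch that generates a flat-ODF (.fodt) file with configurable numbers of paragraphs and tables using only the Python standard library; the file name, the counts, and the bare-bones ODF skeleton are my assumptions, not a validated LibreOffice test template, and a real stress document would also need images, notes, and indexes.

```python
"""Sketch: generate a synthetic stress-test document as flat ODF (.fodt).

Assumptions: a minimal office:document skeleton is enough for Writer to
open the file; all counts are illustrative, not known-safe limits.
"""

FODT_HEADER = """<?xml version="1.0" encoding="UTF-8"?>
<office:document
    xmlns:office="urn:oasis:names:tc:opendocument:xmlns:office:1.0"
    xmlns:text="urn:oasis:names:tc:opendocument:xmlns:text:1.0"
    xmlns:table="urn:oasis:names:tc:opendocument:xmlns:table:1.0"
    office:version="1.2"
    office:mimetype="application/vnd.oasis.opendocument.text">
  <office:body><office:text>
"""
FODT_FOOTER = "  </office:text></office:body>\n</office:document>\n"

def make_chaos(path="chaos.fodt", paragraphs=10_000, tables=100,
               rows=10, cols=5):
    """Write a flat-ODF document with many paragraphs and simple tables."""
    parts = [FODT_HEADER]
    for i in range(paragraphs):
        parts.append(f"    <text:p>Stress paragraph {i}: "
                     "lorem ipsum dolor sit amet.</text:p>\n")
    for t in range(tables):
        parts.append(f'    <table:table table:name="T{t}">\n')
        parts.append(f'      <table:table-column '
                     f'table:number-columns-repeated="{cols}"/>\n')
        for r in range(rows):
            parts.append("      <table:table-row>\n")
            for c in range(cols):
                parts.append(f"        <table:table-cell>"
                             f"<text:p>{t}/{r}/{c}</text:p>"
                             f"</table:table-cell>\n")
            parts.append("      </table:table-row>\n")
        parts.append("    </table:table>\n")
    parts.append(FODT_FOOTER)
    with open(path, "w", encoding="utf-8") as f:
        f.write("".join(parts))
    return path
```

One could then open the generated file in Writer and watch CPU load while scrolling and saving, scaling the counts up by a factor of ten per run as proposed.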
Observation, don't know if it helps ... leaving the document open at the first page overnight -> in the morning, still 100% CPU load. Scrolling two pages down -> CPU load reduces to normal, ~0.5% in this case.
Additional observation: at some places, I think at about 50% of the document, the "focus marking" (the light grey-blue background highlighting searched or selected text) flashes on and off in a short, erratic sequence.
Is it possible that the slowdown is related to the activation of “Save AutoRecovery Information every:”? I had set it to 10 minutes, and the slowdowns seem to be lessened when it is deactivated. After reactivation, the program freezes for about 1 minute every 10 minutes and “repagination” occurs.
(In reply to b. from comment #15)
> Is it possible that the slowdown is related to the
> activation of “Save AutoRecovery Information every:”?
>
> I had set it to 10 minutes, and the slowdowns seem
> to be lessened when it is deactivated. After
> reactivation, the program freezes for about 1 minute
> every 10 minutes and “repagination” occurs.

Well, it's possible, although I expect some overlap: a constant layout loop in the background, exacerbated by auto-save. A performance profile would give more insight into what's going on. There are multiple tools available for CPU profiling on Linux: perf_events, DTrace, SystemTap, or ktap. I have no experience with those tools.
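For reference, the usual perf_events route to a flamegraph looks roughly like the sketch below. Assumptions on my part: perf and Brendan Gregg's FlameGraph scripts are installed (the `~/FlameGraph` path is a guess at a typical checkout), and the PID is found via pidof. The script only prints the recipe and checks tool availability, since attaching to a live soffice.bin is interactive:

```shell
#!/bin/sh
# Sketch of a perf-based flamegraph recording (paths and timings are
# illustrative assumptions):
#
#   perf record -F 99 -g -p "$(pidof soffice.bin)" -- sleep 30
#   perf script > writer.perf
#   ~/FlameGraph/stackcollapse-perf.pl writer.perf | \
#       ~/FlameGraph/flamegraph.pl > writer-repagination.svg
#
# Below we only verify that the profiler is present.
if command -v perf >/dev/null 2>&1; then
    echo "perf available: $(command -v perf)"
else
    echo "perf not installed (Debian/Kali: apt install linux-perf)"
fi
```

The `sleep 30` bounds the recording; triggering a manual save during that window should capture the repagination spike.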
Created attachment 202185 [details]
A "flamegraph" recorded during backup in Writer.

Attached is a "flamegraph", my first flamegraph, recorded during two manually triggered slow backups with "repagination" in Writer. With its high peak it looks a bit unusual to me; a long chain of nested calls? And unfortunately, it contains a lot of "unknown". Perhaps experts can read some clues about the causes from this, or maybe someone can advise me on how to resolve the "unknowns". A local compilation with debug info? How?
Additional observation: after another overnight "stay open" of two similar documents (the original, and a copy, with some small edits in both of them), the original, which was the active window, reproducibly saved in a few seconds, with repagination passing the screen only once. The copy, which was open but not active overnight, takes about 40 seconds to save, with one pass of repagination and a long stall at about 80% finished. After having tried the copy, the pest re-infected the original: now about 45 seconds to save, though now also with only one pass of repagination, where before there had been multiple passes. After closing and reopening Writer, and opening the original document, saving takes ~60 seconds and about 15 runs of repagination. Confusing.
@Ilmari Any advice how to avoid "unknowns" in a flamegraph?
(In reply to Telesto from comment #19) > @Ilmari > Any advice how to avoid "unknowns" in a flamegraph? I was just wondering about the same thing the other day. I have to ask others.
(In reply to Buovjaga from comment #20)
> (In reply to Telesto from comment #19)
> > @Ilmari
> > Any advice how to avoid "unknowns" in a flamegraph?
>
> I was just wondering about the same thing the other day. I have to ask
> others.

I discussed it in the dev chat. In my case, this might be the answer: "we might not be passing down flags properly into 3rd party libs at all times. or its a system lib." In b.'s case, it can be a lack of symbols. There are no pre-built releases suitable for perf tracing, so one would have to do one's own build with --enable-symbols and without any debug/dbgutil options.
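To make the last point concrete: LibreOffice's build is configured via an `autogen.input` file in the source tree. A minimal sketch of what the advice above would translate to (my assumption of a suitable option set, not a tested configuration):

```
# autogen.input -- sketch of a profiling-friendly build, per comment above:
# symbols enabled, but no --enable-debug / --enable-dbgutil
--enable-symbols
```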
Cruel I.) The auto-save backups in Writer also block a parallel Calc process, even if it was started independently from another terminal.

Cruel II.) I tried to identify any weak or faulty areas by deleting chapter by chapter, with inconsistent results, such as:
- deleting chapter 2 -> saving quick,
- reinserting chapter 2 -> saving again very slow,
- removing it again -> saving still very slow ...
The following is widely reproducible:
- open the program and the document: "normal" CPU load,
- save by Ctrl-S or auto-save of recovery information -> high CPU load, which stays high after the save is completed,
- scroll up some pages (PgUp) -> CPU load reduces to normal,
- this save -> high load -> scroll -> normal cycle repeats reliably,
- if at the start of the document when saving, then to reduce the CPU load scroll some pages down and then up again.
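The load pattern above can be logged objectively instead of being watched in a process monitor. Below is a Linux-only sketch that reads a process's accumulated CPU time from /proc; the `soffice.bin` process name is an assumption about the setup, and the helper names are my own:

```python
"""Sketch: sample a process's CPU time from /proc (Linux only), to
correlate load spikes with saves. The soffice.bin name is an assumption."""
import os

def cpu_jiffies(pid: int) -> int:
    """Return utime + stime (in clock ticks) for a PID."""
    with open(f"/proc/{pid}/stat") as f:
        # strip the leading "pid (comm)"; comm may itself contain spaces
        rest = f.read().rsplit(")", 1)[1].split()
    # after the comm field, utime and stime sit at indices 11 and 12
    return int(rest[11]) + int(rest[12])

def find_pid(name: str = "soffice.bin"):
    """Return the first PID whose comm matches `name`, or None."""
    for entry in os.listdir("/proc"):
        if entry.isdigit():
            try:
                with open(f"/proc/{entry}/comm") as f:
                    if f.read().strip() == name:
                        return int(entry)
            except OSError:
                pass  # process may have exited between listdir and open
    return None
```

Sampling `cpu_jiffies(find_pid())` once per second before and after a Ctrl-S would show exactly when the high-load phase starts and whether the PgUp scroll really ends it.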