Description: Memory usage when opening and scrolling a DOCX increased from 600 MB to 1100 MB; 500 MB still in use after close.

Steps to Reproduce:
1. Open the attached file
2. Scroll down to the end using Page Down

Actual Results: 1250 MB memory usage; 360 MB after close

Expected Results: 570 MB at most (as with 6.0) and 227 MB after close

Reproducible: Always

User Profile Reset: No

Additional Info:
Version: 7.1.0.0.alpha0+ (x64)
Build ID: 52820b52b3bca45e2db527d1cc5f4488b2e0b9d0
CPU threads: 4; OS: Windows 6.3 Build 9600; UI render: Skia/Raster; VCL: win
Locale: nl-NL (nl_NL); UI: en-US
Calc: CL
Created attachment 163356 [details] Example file
Use multi-page view to speed up the scrolling a bit.
You can also repeat steps 1-2 in an already running instance; memory use will go up every time. Second run: 1450 MB at the top and 572 MB after close.
Created attachment 163489 [details]
Example file 2

1. Open the attached file
2. Go to multi-page view and scroll down to the bottom -> 600 MB tops with 6.3, 1040 MB with 7.1 master.

After some waiting the cache gets released, down to 400 MB with 6.3 and 780 MB with master. Even after close, memory stays sticky.
(In reply to Telesto from comment #4)
> Created attachment 163489 [details]
> Example file 2
>
> 1. Open the attached file
> 2. Go to multi-page view and scroll down to the bottom -> 600 MB tops with
> 6.3, 1040 MB with 7.1 master. After some waiting the cache gets released,
> down to 400 MB with 6.3 (x86) and 780 MB (x64) with master. Even after
> close, memory stays sticky.

And 341 MB with 6.2 (x86).
Dear Telesto,

Could you please try to reproduce it with a master build from http://dev-builds.libreoffice.org/daily/master/ ? You can install it alongside the standard version.

I have set the bug's status to 'NEEDINFO'. Please change it back to 'UNCONFIRMED' if the bug is still present in the master build.
I bibisected with linux-64-7.0. There were two bumps in memory use. I only checked the last one; it is caused by

https://git.libreoffice.org/core/commit/828504974d70111e4a35b31d579cf42fe660a660
tdf#130768 speedup huge pixel graphics Cairo

so I guess it is just the use of caching to speed things up. The "Cairo" there explains why I don't see the memory bump with the Linux gen backend.

Let's close, as everything seems to be working according to plan.
(In reply to Buovjaga from comment #7)
> I bibisected with linux-64-7.0. There were two bumps in memory use. I only
> checked the last one and it is caused by
> https://git.libreoffice.org/core/commit/828504974d70111e4a35b31d579cf42fe660a660
> tdf#130768 speedup huge pixel graphics Cairo
>
> so I guess it is just the use of caching to speed things up. Cairo there
> explains why I don't see the memory bump with Linux gen backend.
>
> Let's close as everything seems to be working according to plan.

Caching and caching... there is a whole debate about that at bug 138068. That one is the Cairo cache, but apparently broken (or that is how I read it). And Skia might do another round with an independent caching system.

Another topic: is this related to the DOCX filter or not? [I have to check, I don't remember.] The filter tends to convert stuff, so there is plenty of room/'potential' for unintended memory issues with images. That's the part worrying me. It is obviously clear that even developers struggle; otherwise they would have seen bug 138068 from miles away (there is a patch in Gerrit, no clue what it will do).
Oh, and there is the whole lazy loading of images (Miklos), the buffering/caching work (Tomaz, 6.0/6.1; image cache rewrite; losing images), the rendering cache in Skia (Lubos, 7.0/7.1), and the Cairo cache (Armin, 7.0). So lots of developers working on different areas, all related to images. Plus the unique pointer/shared pointer work of Noel (regularly positive). But those are all done on the assumption that the other areas work as expected, and I'm not totally sure that's the case. Oh well, I could go on: the cache increase by Meeks (somewhere in 5.2). The lazy loading/unloading is obviously hard to grasp, with some timer stuff involved. It is nice to have a whole 500 pages of images loaded into memory for quick access, but it is also resource hogging. I scrolled through the whole document (Page Down), and you have to deal with 900 MB simply because of that.
@Lubos
This again involves the lovely:

tdf#130768 speedup huge pixel graphics Cairo

I'm not sure if this is solved indirectly by your patch. However, in my opinion the patch does not deliver (speed maybe, but at a pretty big cost). And based on the whole discussion at bug 138068, I'm not totally sure how polished Armin's patch is. So: intended consequences, or some kind of additional fall-out?

(In reply to Buovjaga from comment #7)
> I bibisected with linux-64-7.0. There were two bumps in memory use. I only
> checked the last one and it is caused by
> https://git.libreoffice.org/core/commit/828504974d70111e4a35b31d579cf42fe660a660
> tdf#130768 speedup huge pixel graphics Cairo
>
> so I guess it is just the use of caching to speed things up. Cairo there
> explains why I don't see the memory bump with Linux gen backend.
>
> Let's close as everything seems to be working according to plan.
I've run the steps to reproduce in Heaptrack and as far as I can tell there's no significant memory leak. The problem appears to be that we do a huge number of small allocations, which means glibc has to allocate a lot of system memory, then when freeing that memory gets fragmented, so there's apparently no good way to return it to the system. I don't think there's a reasonably simple solution to this.
(In reply to Luboš Luňák from comment #11)
> I've run the steps to reproduce in Heaptrack and as far as I can tell
> there's no significant memory leak. The problem appears to be that we do a
> huge number of small allocations, which means glibc has to allocate a lot of
> system memory, then when freeing that memory gets fragmented, so there's
> apparently no good way to return it to the system. I don't think there's a
> reasonably simple solution to this.

Write it down on the to-do list somewhere, or let Armin think about it for a while, assuming the bibisect is correct and this was not present previously. In any case, keep it somewhere in the collective developer memory that this is not 'optimal'. It will likely become a problem someday, in some context.
Oh, I forgot: thanks for analyzing!