In various LO modules, it sometimes happens that a large, partially-transparent object appears in front of a smaller object. The smaller object can typically only be selected by surrounding it with a selection rectangle; and sometimes even that is quite challenging. It would be nice if LO interpreted relevant gestures as an intent to select the further-back object: specifically, keeping the mouse mostly static and repeating a mouse selection action, relatively slowly, at a point where the object behind is also present. After a few long and/or repeated left-mouse-button presses - which are extremely unlikely to occur accidentally/unintentionally - LO could figure out what's going on and select an object other than the frontmost one.
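To make the intent concrete, here is a rough, purely illustrative sketch of the kind of heuristic meant (the class name, thresholds and hit-test interface are all hypothetical, not taken from any actual LibreOffice code): repeated, slow, roughly stationary left clicks advance the selection one step further back through the objects under the pointer, while an ordinary click resets to the front-most object.

// Purely illustrative sketch of the proposed heuristic; names and thresholds
// are hypothetical, not LibreOffice API.
#include <chrono>
#include <cmath>
#include <cstddef>

struct Point { double x, y; };

class SelectionCycler {
public:
    // 'hitCount' is the number of objects under the click point, front-most
    // first, as some hit-test would report them. Returns the depth (index)
    // of the object that should become selected.
    std::size_t onLeftClick(Point pos,
                            std::chrono::steady_clock::time_point when,
                            std::size_t hitCount)
    {
        using namespace std::chrono_literals;
        const auto sinceLast = when - m_lastTime;
        const bool nearLastClick =
            std::hypot(pos.x - m_lastPos.x, pos.y - m_lastPos.y) < 4.0; // px
        const bool slowRepeat = m_haveLast && nearLastClick &&
                                sinceLast > 400ms && sinceLast < 3s;

        if (slowRepeat && hitCount > 0)
            m_depth = (m_depth + 1) % hitCount; // one object further back
        else
            m_depth = 0;                        // ordinary click: front-most

        m_lastPos = pos;
        m_lastTime = when;
        m_haveLast = true;
        return m_depth;
    }

private:
    Point m_lastPos{0.0, 0.0};
    std::chrono::steady_clock::time_point m_lastTime{};
    bool m_haveLast = false;
    std::size_t m_depth = 0; // how far down the z-order the selection is
};

The thresholds above are placeholders, of course; the point is only that deliberate, repeated, stationary pressing is easy to distinguish from ordinary clicking.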
Interesting idea, cycling object selection with a long mouse-press. Would supplement current <Tab> / <Shift>+<Tab> of kb movement (and then selection) between sd objects--they already cycle. Or equally, by mouse click in the SB Navigator deck (which shows the object selection(s)). Guess it would depend on ability to implement a long hold cross-platform and per DE. If it can't be provided consistently, then probably a => WF, as we already enable precise selection via kb and the SB Navigator.
(In reply to V Stuart Foote from comment #1)
> Would supplement current <Tab> / <Shift>+<Tab> of kb movement (and then
> selection) between sd objects--they already cycle.

That is possible, but it means switching from the mouse to the keyboard; this suggestion would enable mouse-based selection. It can also be difficult if you have a lot of elements on the slide / page.

> Or equally, by mouse click in the SB Navigator deck (which shows the object
> selection(s)).

Well, yes, that is possible; but: the user doesn't know which object to select, so they would need to cycle through all objects of a possibly-relevant type; and objects don't have meaningful names.

> Guess it would depend on ability to implement a long hold cross-platform and
> per DE. If it can't be provided consistently, then probably a => WF

Such provision could be part of the necessary work...
(In reply to Eyal Rozenberg from comment #2)
> > Or equally, by mouse click in the SB Navigator deck (which shows the object
> > selection(s)).
>
> Well, yes, that is possible; but: the user doesn't know which object to
> select, so they would need to cycle through all objects of a possibly-relevant
> type; and objects don't have meaningful names.

And "Eve" users will name their objects as they need; the SB Navigator now shows them all by default, including generic object names (it used to be just user-named objects, bug 34828). Also, the rework of the Navigator deck provides an on-mouse-over "flash" of the associated object on the Writer page. In sd (Draw, Impress), SB Navigator selection will expose the object grab handles (no mouse-over); and in sc (Calc) the Navigator is more limited - double-click selection with drag behavior, while manipulation happens directly in the object's context.
(In reply to V Stuart Foote from comment #3)
> And "Eve" users will name their objects as they need;

No, she likely won't, because it is not worth the effort to name objects that way.

> the SB Navigator now shows them all by default, including generic object names
> (it used to be just user-named objects, bug 34828). Also, the rework of the
> Navigator deck provides an on-mouse-over "flash" of the associated object on
> the Writer page.

That's a good point; that would be possible. Still, you're in the middle of working on something, your mouse is just "right there", and you want to get at the object under it, which you're seeing. It's at your mouse-tip - and yet so far away. Do you really want to have to let it go, move to the side of the window, open the sidebar, and start going over all the objects in the slide, waiting for the right one to blink? It's a hassle. I would like to see this user desire catered to in a more localized way.
Me votes against introducing gestures in general. If an interaction is not familiar to users on the OS/DE, it will always be a hidden-gem thing. Plus, gestures are hard-coded and do not allow customization. Last but not least, it's a no-go for a11y. In particular, we allow clicking through the z-order of objects via Alt+left click, which is much faster, more common in vector drawing tools, and works well at middle levels. By 'middle level' I mean a stack of objects where, with the long-press gesture, you start on top, get to the second layer, and then have to click and hold in order to go further down - but that initial click usually selects the top-most object.
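For comparison, the Alt+Click behaviour described above boils down to something like the following (again only a loose illustration with hypothetical names, not the actual implementation): take the objects under the pointer front-to-back, and select the one just behind the currently selected one, wrapping around to the top.

// Loose illustration of Alt+Click-style cycling; 'nextBehind' is a
// hypothetical helper, not actual LibreOffice code.
#include <cstddef>
#include <vector>

// 'hitsTopToBottom' holds IDs of the objects under the pointer, index 0 being
// the front-most; 'selectedIndex' is the position of the currently selected
// object in that list (>= size() if none of them is selected).
std::size_t nextBehind(const std::vector<int>& hitsTopToBottom,
                       std::size_t selectedIndex)
{
    if (hitsTopToBottom.empty())
        return 0;
    if (selectedIndex >= hitsTopToBottom.size())
        return 0;                                        // start at the front-most
    return (selectedIndex + 1) % hitsTopToBottom.size(); // one step further back
}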
(In reply to Heiko Tietze from comment #5)
> Me votes against introducing gestures in general. If an interaction is not
> familiar to users on the OS/DE, it will always be a hidden-gem thing.

Isn't it, though? I mean, if someone holds down the mouse button for a long time, that means they're expecting something to happen, doesn't it?

> Plus, gestures are hard-coded and do not allow customization.

True, but that's no different from, say, a single click vs. a double-click for activating a button.

> Last but not least, it's a no-go for a11y.

On the contrary, this _helps_ accessibility - for people who can use a mouse but not a keyboard, and for interaction via a touch-screen.

> In particular, we allow clicking through the z-order of objects via Alt+left
> click, which is much faster, more common in vector drawing tools, and works
> well at middle levels.

Oh, that's interesting, I was not aware of that feature! Anyway, I still think it's worthwhile to do this, since there is some benefit and, IMHO, effectively no detriment. But given that there are workarounds, I agree this is not a must-have.
(In reply to Eyal Rozenberg from comment #6)
> True, but that's no different from, say, a single click vs. a double-click
> for activating a button.

The long-press gesture was introduced for touch-only devices to simulate a right click. You may also ask for click-move gestures similar to swipe navigation.
(In reply to Heiko Tietze from comment #7)
> (In reply to Eyal Rozenberg from comment #6)
> > True, but that's no different from, say, a single click vs. a double-click
> > for activating a button.
>
> The long-press gesture was introduced for touch-only devices to simulate a
> right click. You may also ask for click-move gestures similar to swipe
> navigation.

I see... so it actually won't be appropriate for a touch device at all, only for devices with multi-button pointers. :-(