Bug 83153 - Implement UAA framework to support speech recognition and dictation support
Alias: None
Product: LibreOffice
Classification: Unclassified
Component: LibreOffice
Version (earliest affected): Inherited From OOo
Hardware: Other
OS: All
Importance: medium enhancement
Assignee: Not Assigned
Whiteboard: BSA speechRecognition
Keywords: accessibility
Duplicates: 146652
Depends on: 81759
Blocks: a11y, Accessibility
Reported: 2014-08-27 15:56 UTC by goldfish
Modified: 2023-06-07 16:42 UTC
CC List: 5 users

Description goldfish 2014-08-27 15:56:43 UTC
Problem description: No Text To Speech or Speech To Text

Steps to reproduce: N/A

Current behavior: N/A

Expected behavior: Should allow dictation, and should read selected text aloud when given that command.

Please could you add Text to Speech and Speech to Text as well. Google's Text to Speech and Speech to Text engines could perhaps be used (with their permission, of course) via an extension or similar.

This would help users dictate notes and letters; similarly, it could read a document aloud.

Maybe, even voice commands like: "LO, select all"

Operating System: All
Version: Master
Comment 1 goldfish 2014-08-27 16:05:44 UTC
Some Android apps have Text to Speech, and some have Speech to Text.
Comment 2 goldfish 2014-08-27 16:08:08 UTC
This would be a KILLER feature, making all those bogus complaints of an "ugly" user interface *obsolete*.
Comment 3 tommy27 2014-08-29 14:45:06 UTC
Enhancement request. This feature has never been present in OOo or LibO.
Status: NEW. Version: inherited from OOo.
Comment 4 V Stuart Foote 2014-12-03 16:25:41 UTC
This enhancement is clearly beyond project scope. I believe it should be set RESOLVED WONTFIX, or possibly converted to a meta bug.

Each OS requires its own framework for speech recognition. That framework would then have to interact with the UNO Accessibility API via a native accessibility bridge (ATK, IAccessible2, or NSAccessibility).

There is already marginal support on OS X with VoiceOver and Dictation (aka Siri).

The Orca project's ATK support does not implement speech recognition, so at most there is minimal coverage from GNOME Voice Control or Simon Listens.

And since Microsoft went its own way with UIA (dropping the Text Services Framework), there is nothing integrating IAccessible2 with UIA-based Windows Speech Recognition.

Unfortunately, as there is no "standard" for speech recognition, there is nothing for the project to implement.

Development would of necessity be external to the project. Open-source projects such as Dragonfly (Python) or Sphinx (now implemented in Java) offer some promise of providing a speech recognition framework, but at most the integration would be via an external API interface, not in the core, just as Nuance's Dragon NS did in the past.

Improvements to the UNO Accessibility API and the native bridges are, of course, in scope to accommodate such an implementation.
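[Editorial note: the external-integration approach described above could be sketched roughly as follows. This is a hypothetical illustration, not an existing implementation: a speech recognizer (e.g., Sphinx or Dragonfly) would emit command phrases, and an external script would map them to real LibreOffice `.uno:` dispatch URLs and send them to a running soffice instance over a UNO socket connection. The phrase table and function names are invented for the example; only the `.uno:` commands and the UNO service names are actual LibreOffice APIs.]

```python
# Hypothetical external voice-command bridge for LibreOffice.
# The recognizer itself is out of scope here; we assume it hands us a phrase.
from typing import Optional

# Recognized phrases mapped to LibreOffice dispatch URLs.
# The .uno: slugs are real dispatch commands; the phrase list is illustrative.
VOICE_COMMANDS = {
    "select all": ".uno:SelectAll",
    "copy": ".uno:Copy",
    "paste": ".uno:Paste",
    "bold": ".uno:Bold",
}

def phrase_to_dispatch(phrase: str) -> Optional[str]:
    """Normalize a recognized phrase and look up its dispatch URL."""
    return VOICE_COMMANDS.get(phrase.strip().lower())

def dispatch_in_libreoffice(command_url: str) -> None:
    """Send a dispatch command to a running soffice instance.

    Requires LibreOffice to have been started listening on a socket, e.g.:
      soffice --accept="socket,host=localhost,port=2002;urp;"
    """
    import uno  # provided by the LibreOffice Python runtime
    local_ctx = uno.getComponentContext()
    resolver = local_ctx.ServiceManager.createInstanceWithContext(
        "com.sun.star.bridge.UnoUrlResolver", local_ctx)
    ctx = resolver.resolve(
        "uno:socket,host=localhost,port=2002;urp;"
        "StarOffice.ComponentContext")
    smgr = ctx.ServiceManager
    desktop = smgr.createInstanceWithContext(
        "com.sun.star.frame.Desktop", ctx)
    dispatcher = smgr.createInstanceWithContext(
        "com.sun.star.frame.DispatchHelper", ctx)
    # Dispatch against the current (focused) frame.
    dispatcher.executeDispatch(
        desktop.getCurrentFrame(), command_url, "", 0, ())
```

Such a bridge keeps all speech machinery outside the core, consistent with the comment above: only the existing UNO dispatch interface is exercised, and the recognizer can be swapped freely.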
Comment 5 Robinson Tryon (qubit) 2015-12-18 09:31:25 UTC Comment hidden (obsolete)
Comment 6 V Stuart Foote 2017-06-01 20:06:05 UTC
*** Bug 108271 has been marked as a duplicate of this bug. ***
Comment 7 V Stuart Foote 2022-01-08 14:18:35 UTC
*** Bug 146652 has been marked as a duplicate of this bug. ***