eSpeak

eSpeakNG is a compact, open-source, software speech synthesizer for Linux, Windows, and other platforms. It uses a formant synthesis method, providing many languages in a small size. Much of the programming for eSpeakNG's language support is done using rule files with feedback from native speakers.

eSpeakNG
  Original author(s): Jonathan Duddington
  Developer(s): Reece Dunn
  Initial release: February 2006
  Stable release: 1.50 / 30 October 2020
  Repository: github.com/espeak-ng/espeak-ng/
  Written in: C
  Operating systems: Linux, Windows, macOS, FreeBSD
  Type: Speech synthesizer
  License: GPLv3
  Website: github.com/espeak-ng/espeak-ng/

Because of its small size and support for many languages, it is included as the default speech synthesizer in the NVDA[1] open-source screen reader for Windows, as well as in Android,[2] Ubuntu[3] and other Linux distributions. Its predecessor eSpeak was recommended by Microsoft in 2016[4] and was used by Google Translate for 27 languages in 2010;[5] 17 of these were subsequently replaced by commercial voices.[6]

The quality of the language voices varies greatly. In eSpeakNG's predecessor eSpeak, the initial versions of some languages were based on information found on Wikipedia.[7] Some languages have had more work or feedback from native speakers than others. Most of the people who have helped to improve the various languages are blind users of text-to-speech.

History

In 1995, Jonathan Duddington released the Speak speech synthesizer for RISC OS computers supporting British English.[8] On 17 February 2006, Speak 1.05 was released under the GPLv2 license, initially for Linux, with a Windows SAPI 5 version added in January 2007.[9] Development on Speak continued until version 1.14, when it was renamed to eSpeak.

Development of eSpeak continued from 1.16 (there was no 1.15 release)[9] with the addition of an eSpeakEdit program for editing and building the eSpeak voice data. These were only available as separate source and binary downloads up to eSpeak 1.24. eSpeak 1.24.02 was the first version to be version-controlled using Subversion,[10] with separate source and binary downloads made available on SourceForge.[9] From version 1.27, eSpeak was updated to use the GPLv3 license.[11] The last official eSpeak release was 1.48.04 for Windows and Linux, 1.47.06 for RISC OS and 1.45.04 for macOS.[12] The last development release was 1.48.15 on 16 April 2015.[13]

eSpeak represents phonemes with ASCII characters, using a scheme loosely based on the Usenet (Kirshenbaum) notation.[14]

eSpeak NG

On 25 June 2010,[15] Reece Dunn started a fork of eSpeak on GitHub using the 1.43.46 release. This started off as an effort to make it easier to build eSpeak on Linux and other POSIX platforms.

On 4 October 2015 (6 months after the 1.48.15 release of eSpeak), this fork started diverging more significantly from the original eSpeak.[16][17]

On 8 December 2015, there were discussions on the eSpeak mailing list about the lack of activity from Jonathan Duddington in the eight months since the last eSpeak development release. These evolved into discussions about continuing development of eSpeak in his absence.[18][19] The result was the creation of the espeak-ng (Next Generation) fork, using the GitHub version of eSpeak as the basis for future development.

On 11 December 2015, the espeak-ng fork was started.[20] The first release of espeak-ng was 1.49.0 on 10 September 2016,[21] containing significant code cleanup, bug fixes, and language updates.

Features

eSpeakNG can be used as a command-line program, or as a shared library.

It supports Speech Synthesis Markup Language (SSML).
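For example, a minimal SSML fragment using standard elements from the SSML specification (eSpeakNG implements a subset of SSML; the exact subset supported should be checked against its documentation):

```xml
<speak version="1.0" xml:lang="en">
  Hello, <break time="300ms"/>
  <prosody rate="slow" pitch="low">world</prosody>.
</speak>
```

With the command-line tool, markup interpretation is enabled with the -m option, e.g. espeak-ng -m -f input.ssml.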

Language voices are identified by the language's ISO 639-1 code. They can be modified by "voice variants". These are text files which can change characteristics such as pitch range, add effects such as echo, whisper and croaky voice, or make systematic adjustments to formant frequencies to change the sound of the voice. For example, "af" is the Afrikaans voice. "af+f2" is the Afrikaans voice modified with the "f2" voice variant which changes the formants and the pitch range to give a female sound.
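As a sketch of how voices and variants combine on the command line (assuming an installed espeak-ng binary; the -w option writes a WAV file instead of playing audio):

```shell
# Synthesize with the base Afrikaans voice, then with the "f2" female variant.
espeak-ng -v af -w af.wav "Goeie more"
espeak-ng -v af+f2 -w af_f2.wav "Goeie more"

# Variants combine with other options, e.g. speaking rate in words
# per minute (-s) and base pitch on a 0-99 scale (-p).
espeak-ng -v af+f2 -s 140 -p 60 -w af_f2_slow.wav "Goeie more"
```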

eSpeakNG uses an ASCII representation of phoneme names which is loosely based on the Usenet system.

Phonetic representations can be included within text input by including them within double square-brackets. For example: espeak-ng -v en "Hello [[w3:ld]]" will say Hello world in English.

Synthesis method

eSpeakNG can be used at several stages of the text-to-speech pipeline, depending on which translation steps the user wants it to perform.

Step 1 — text-to-phoneme translation

There are many languages (notably English) which don't have straightforward one-to-one rules between writing and pronunciation; therefore, the first step in text-to-speech generation has to be text-to-phoneme translation.

  1. The input text is translated into pronunciation phonemes (e.g. the input text xerox is translated into zi@r0ks).
  2. The phonemes are synthesized into sound (e.g. zi@r0ks is voiced, in a monotone way).
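The first step can be sketched as a toy longest-match rule system. The rules below are invented for illustration; eSpeakNG's real per-language rule files are far more context-sensitive.

```python
# Toy rule-based grapheme-to-phoneme translation, loosely in the
# spirit of eSpeakNG's rule files. Rules map grapheme sequences to
# eSpeakNG-style ASCII phonemes; these particular rules are invented.
RULES = {
    "xer": "zi@r",  # as in "xerox"
    "ox": "0ks",
    "o": "0",
    "x": "ks",
}

def to_phonemes(word):
    out, i = "", 0
    while i < len(word):
        for length in range(3, 0, -1):   # try the longest rule first
            chunk = word[i:i + length]
            if chunk in RULES:
                out += RULES[chunk]
                i += length
                break
        else:
            out += word[i]               # pass unknown letters through
            i += 1
    return out

print(to_phonemes("xerox"))  # -> zi@r0ks
```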

To add intonation to the speech, prosody data are necessary (e.g. syllable stress, falling or rising pitch of the fundamental frequency, pauses, etc.), along with other information that allows more human, non-monotonous speech to be synthesized. For example, in eSpeakNG notation a stressed syllable is marked with an apostrophe: z'i@r0ks, which produces more natural speech with intonation.

For comparison, here are two samples, without and with prosody data:

  1. [[DIs Iz m0noUntoUn spi:tS]] is spoken in a monotone way
  2. [[DIs Iz 'Int@n,eItI2d sp'i:tS]] is spoken with intonation

If eSpeakNG is used only to generate prosody data, that data can be used as input for MBROLA diphone voices.

Step 2 — sound synthesis from prosody data

eSpeakNG provides two different approaches to formant speech synthesis: its own eSpeakNG synthesizer and a Klatt synthesizer.[22]

  1. The eSpeakNG synthesizer creates voiced speech sounds, such as vowels and sonorant consonants, by additive synthesis: sine waves are added together to make the total sound. Unvoiced consonants such as /s/ are made by playing recorded sounds,[23] because their noise-like spectra make additive synthesis less effective. Voiced consonants such as /z/ are made by mixing a synthesized voiced sound with a recorded sample of unvoiced sound.
  2. The Klatt synthesizer mostly uses the same formant data as the eSpeakNG synthesizer, but it also produces sounds by subtractive synthesis: it starts with generated noise, which is rich in frequency content, and applies digital filters and enveloping to shape the spectrum and amplitude envelope required for a particular consonant (s, t, k) or sonorant (l, m, n) sound.
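The additive approach can be illustrated with a minimal sketch: summing sine waves at assumed formant frequencies produces a crude voiced sound. This is not eSpeakNG's actual implementation, which also shapes amplitudes and formants over time; the frequencies below are rough textbook values for an /a/-like vowel.

```python
import math

def additive_voiced(formants, duration_s=0.05, sample_rate=22050):
    """Toy additive synthesis: sum sine waves at the given formant
    frequencies (Hz). Illustrative only; real formant synthesis varies
    amplitudes and frequencies over the course of each phoneme."""
    n = int(duration_s * sample_rate)
    samples = []
    for i in range(n):
        t = i / sample_rate
        s = sum(math.sin(2 * math.pi * f * t) for f in formants)
        samples.append(s / len(formants))  # keep samples within [-1, 1]
    return samples

# Rough formant frequencies for an /a/-like vowel (assumed values).
wave = additive_voiced([700, 1200, 2600])
```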

For the MBROLA voices, eSpeakNG converts the text to phonemes and associated pitch contours. It passes these to the MBROLA program using the PHO file format and captures the audio that MBROLA produces. That audio is then handled by eSpeakNG.
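As a sketch, a PHO stream is line-oriented: each line gives a phoneme name, its duration in milliseconds, and optional pairs of (position within the phoneme in percent, pitch in Hz). The phoneme names, durations, and pitch values below are invented for illustration:

```
; hypothetical .pho fragment: phoneme, duration (ms),
; then (percent-position, pitch-in-Hz) pairs for the contour
_   50
h   60
@   80   0 110  80 120
l   70
@U  160  0 120  100 100
_   50
```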

Languages

eSpeakNG performs text-to-speech synthesis for the following languages:[24][25]

  1. Abaza
  2. Afar
  3. Afrikaans[26]
  4. Albanian[27]
  5. Amharic
  6. Ancient Greek
  7. Arabic1
  8. Aragonese[28]
  9. Armenian (Eastern Armenian)
  10. Armenian (Western Armenian)
  11. Assamese
  12. Azerbaijani
  13. Bashkir
  14. Basque
  15. Basic English
  16. Belarusian
  17. Bengali
  18. Bhojpuri
  19. Bishnupriya Manipuri
  20. Bosnian
  21. Bulgarian[28]
  22. Breton
  23. Burmese
  24. Cantonese[28]
  25. Catalan[28]
  26. Cebuano
  27. Cherokee
  28. Chichewa
  29. Chinese (Mandarin)
  30. Corsican
  31. Croatian[28]
  32. Czech
  33. Chuvash
  34. Church Slavonic
  35. Danish[28]
  36. Dutch[28]
  37. Dzongkha
  38. English (American)[28]
  39. English (British)
  40. English (Caribbean)
  41. English (Lancastrian)
  42. English (Received Pronunciation)
  43. English (Scottish)
  44. English (West Midlands)
  45. Esperanto[28]
  46. Estonian[28]
  47. Finnish[28]
  48. Filipino
  49. French (Belgian)[28]
  50. French (France)
  51. French (Swiss)
  52. Frisian
  53. Galician
  54. Georgian[28]
  55. German[28]
  56. Greek (Modern)[28]
  57. Greenlandic
  58. Guarani
  59. Gujarati
  60. Hakka Chinese
  61. Haitian Creole
  62. Hausa
  63. Hawaiian
  64. Hebrew
  65. High Valyrian
  66. Hindi[28]
  67. Hmong
  68. Hungarian[28]
  69. Icelandic[28]
  70. Igbo
  71. Indonesian[28]
  72. Ido
  73. Interlingua
  74. Interlingue
  75. Irish[28]
  76. Italian[28]
  77. Japanese3[29]
  78. Kannada[28]
  79. Kazakh
  80. Khmer
  81. Klingon
  82. Kʼicheʼ
  83. Kirundi
  84. Kinyarwanda
  85. Konkani[30]
  86. Korean
  87. Kurdish[28]
  88. Kyrgyz
  89. Quechua
  90. Ladakhi
  91. Lao
  92. Latin
  93. Ladino
  94. Latgalian
  95. Latvian[28]
  96. Lang Belta
  97. Lingua Franca Nova
  98. Lepcha
  99. Limbu
  100. Lithuanian
  101. Lojban[28]
  102. Luxembourgish
  103. Macedonian
  104. Maithili
  105. Malagasy
  106. Malay[28]
  107. Malayalam[28]
  108. Maltese
  109. Māori
  110. Marathi[28]
  111. Mongolian
  112. Nahuatl (Classical)
  113. Navajo
  114. Nepali[28]
  115. Norwegian (Bokmål)[28]
  116. Northern Sotho
  117. Nogai
  118. Odia
  119. Oromo
  120. Occitan
  121. Papiamento
  122. Palauan
  123. Pashto
  124. Persian[28]
  125. Persian (Latin alphabet)2
  126. Polish[28]
  127. Portuguese (Brazilian)[28]
  128. Portuguese (Portugal)
  129. Punjabi[31]
  130. Pyash (a constructed language)
  131. Romanian[28]
  132. Russian[28]
  133. Russian (Latvia)
  134. Samoan
  135. Sanskrit
  136. Scottish Gaelic
  137. Serbian[28]
  138. Shan (Tai Yai)
  139. Sharda
  140. Sesotho
  141. Shona
  142. Sindhi
  143. Sinhala
  144. Slovak[28]
  145. Slovenian
  146. Somali
  147. Spanish (Spain)[28]
  148. Spanish (Latin American)
  149. Swahili[26]
  150. Swedish[28]
  151. Tajik
  152. Tamil[28]
  153. Tatar
  154. Telugu
  155. Tibetan
  156. Tswana
  157. Thai
  158. Turkmen
  159. Turkish[28]
  160. Uyghur
  161. Ukrainian
  162. Urdu
  163. Uzbek
  164. Vietnamese (Central Vietnamese)[28]
  165. Vietnamese (Northern Vietnamese)
  166. Vietnamese (Southern Vietnamese)
  167. Volapük
  168. Welsh
  169. Wolof
  170. Xhosa
  171. Yiddish
  172. Yoruba
  173. Zulu
  1. Currently, only fully diacritized Arabic is supported.
  2. Persian written using English (Latin) characters.
  3. Currently, only Hiragana and Katakana are supported.

References

  1. Switch to eSpeak NG in NVDA distribution #5651
  2. eSpeak TTS for Android
  3. espeak-ng package in Ubuntu
  4. Microsoft Support, Download voices for Immersive Reader, Read Mode, and Read Aloud, https://support.office.com/en-us/article/download-voices-for-immersive-reader-read-mode-and-read-aloud-4c83a8d8-7486-42f7-8e46-2b0fdf753130
  5. Google blog, Giving a voice to more languages on Google Translate, May 2010
  6. Google blog, Listen to us now, December 2010.
  7. eSpeak Speech Synthesizer 3. LANGUAGES
  8. eSpeak homepage, http://espeak.sourceforge.net/
  9. eSpeak releases on SourceForge, https://sourceforge.net/projects/espeak/files/espeak/
  10. Subversion history (revision 1)
  11. Subversion history (revision 56)
  12. eSpeak downloads, http://espeak.sourceforge.net/download.html
  13. eSpeak development version, http://espeak.sourceforge.net/test/latest.html
  14. van Leussen, Jan-Willem; Tromp, Maarten (26 July 2007). "Latin to Speech". p. 6. CiteSeerX 10.1.1.396.7811.
  15. rhdunn/espeak commit, https://github.com/rhdunn/espeak/commit/63daaecefccde34b700bd909d23c6dd2cac06e20
  16. rhdunn/espeak commit, https://github.com/rhdunn/espeak/commit/61522a12a38453a4e854fd9c9e0994ad80420243
  17. NVDA issue #5651 comment, https://github.com/nvaccess/nvda/issues/5651#issuecomment-170288487
  18. Taking ownership of the eSpeak project and its future
  19. Vote for new main eSpeak developer
  20. Rebrand the espeak program to espeak-ng.
  21. espeak-ng 1.49.0
  22. Dennis H. Klatt (1979). "Software for a cascade/parallel formant synthesizer" (PDF). J. Acoustical Society of America, 67(3) March 1980.
  23. List of recorded fricatives in eSpeakNG
  24. eSpeakNG supported languages, https://github.com/espeak-ng/espeak-ng/blob/master/docs/languages.md
  25. eSpeakNG change log, https://github.com/espeak-ng/espeak-ng/blob/master/CHANGELOG.md
  26. Butgereit, L., & Botha, A. (2009, May). Hadeda: The noisy way to practice spelling vocabulary using a cell phone. In The IST-Africa 2009 Conference, Kampala, Uganda.
  27. Hamiti, M., & Kastrati, R. (2014). Adapting eSpeak for converting text into speech in Albanian. International Journal of Computer Science Issues (IJCSI), 11(4), 21.
  28. Kayte, S., & Gawali, D. B. (2015). Marathi Speech Synthesis: A review. International Journal on Recent and Innovation Trends in Computing and Communication, 3(6), 3708-3711.
  29. Pronk, R. (2013). Adding Japanese language synthesis support to the eSpeak system. University of Amsterdam.
  30. Mohanan, S., Salkar, S., Naik, G., Dessai, N. F., & Naik, S. (2012). Text Reader for Konkani Language. Automation and Autonomous System, 4(8), 409-414.
  31. Kaur, R., & Sharma, D. (2016). An Improved System for Converting Text into Speech for Punjabi Language using eSpeak. International Research Journal of Engineering and Technology, 3(4), 500-504.
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.