A text-to-speech application is synthesizer software that converts text into spoken words by first analyzing and processing the text using Natural Language Processing (NLP) and then using Digital Signal Processing (DSP) technology to render the processed text as a synthesized speech representation. Visually impaired students make many mistakes while typing because they can only assume that what they are typing is correct; this problem led to the development of this Text-To-Speech application. Here, we developed a useful text-to-speech synthesizer in the form of a simple application that converts inputted text into synthesized speech, reads it out to the user, and can save the result as an .mp3 file. Observation and interviews were used to gather the information needed to develop this material. The development of a text-to-speech synthesizer will be of great help to people with visual impairment and will make working through large volumes of text easier.
As our society expands further, there has been growing support for disadvantaged citizens, including the disabled. One urgent form of support is the guarantee of mobility for blind people. Despite many efforts, it is still not easy for blind people to move about independently. As electronic technologies have improved, research into Electronic Aids (EA) for blind people has begun. As a current product, Human Tech of Japan developed a navigation system for blind people using GPS and a mobile phone. This system consists of the user's (blind person's) mobile phone, a subminiature GPS receiver, a magnetic direction sensor, a control unit, and speech-synthesis equipment with a PC at the base station.
Text-to-speech has been available for decades (since 1939). Unfortunately, the quality of the output, especially in terms of naturalness, has historically been sub-optimal; terms such as "robotic" have been used to describe synthetic speech. Recently, the overall quality of text-to-speech from some vendors has improved dramatically. Quality is now evident not only in the remarkable naturalness of inflection and intonation, but also in the ability to process text such as numbers, abbreviations, and addresses in the appropriate context.
Text-to-speech (TTS) is a type of speech synthesis application that is used to create a spoken version of the text in a computer document, such as a help file or a Web page. TTS can enable the reading of computer display information for visually challenged people, or may simply be used to augment the reading of a text message. Current TTS applications include voice-enabled e-mail and spoken prompts in voice response systems.
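Before the DSP stage can synthesize audio, the NLP front-end described above must expand numbers and abbreviations into pronounceable words. The following is a minimal sketch of such a text-normalization step in Python; the abbreviation table and the `number_to_words` helper are illustrative assumptions for this report, not the API of any particular TTS engine.

```python
import re

# Hypothetical abbreviation table; a real TTS front-end would use a much
# larger, context-sensitive dictionary.
ABBREVIATIONS = {"Dr.": "Doctor", "St.": "Street", "etc.": "et cetera"}

UNITS = ["zero", "one", "two", "three", "four", "five", "six", "seven",
         "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
         "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def number_to_words(n):
    """Spell out a non-negative integer (digit-by-digit above 999)."""
    if n < 20:
        return UNITS[n]
    if n < 100:
        rest = n % 10
        return TENS[n // 10] + ("" if rest == 0 else " " + UNITS[rest])
    if n < 1000:
        rest = n % 100
        return UNITS[n // 100] + " hundred" + (
            "" if rest == 0 else " " + number_to_words(rest))
    # Fallback for long numbers: read each digit separately.
    return " ".join(UNITS[int(d)] for d in str(n))

def normalize(text):
    """Expand abbreviations and numbers into pronounceable words."""
    for abbr, full in ABBREVIATIONS.items():
        text = text.replace(abbr, full)
    return re.sub(r"\d+", lambda m: number_to_words(int(m.group())), text)
```

For example, `normalize("Dr. Smith lives at 21 Main St.")` yields "Doctor Smith lives at twenty one Main Street", which a DSP back-end can then turn into audio word by word.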
1.2 BACKGROUND OF THE STUDY
Long before electronic signal processing was invented, there were those who tried to build machines to create human speech. Some early legends of the existence of "Brazen Heads" involved Pope Silvester II (d. 1003 AD), Albertus Magnus (1198–1280), and Roger Bacon (1214–1294). In 1779, the Danish scientist Christian Kratzenstein, working at the Russian Academy of Sciences, built models of the human vocal tract that could produce the five long vowel sounds (in International Phonetic Alphabet notation, they are [aː], [eː], [iː], [oː] and [uː]). This was followed by the bellows-operated "acoustic-mechanical speech machine" by Wolfgang von Kempelen of Pressburg, Hungary, described in a 1791 paper. This machine added models of the tongue and lips, enabling it to produce consonants as well as vowels. According to Charles (1857), Wheatstone produced a "speaking machine" based on von Kempelen's design, and in 1857, M. Faber built the "Euphonia". Wheatstone's design was resurrected in 1923 by Paget.
In the 1930s, Bell Labs developed the vocoder, which automatically analyzed speech into its fundamental tone and resonances. From his work on the vocoder, Homer Dudley developed a keyboard-operated voice synthesizer called the Voder (Voice Demonstrator), which he exhibited at the 1939 New York World's Fair. The Pattern Playback was built by Dr. Franklin S. Cooper and his colleagues at Haskins Laboratories in the late 1940s and completed in 1950. There were several different versions of this hardware device, but only one currently survives. The machine converts pictures of the acoustic patterns of speech, in the form of a spectrogram, back into sound. Using this device, Allen J (2007) was able to discover acoustic cues for the perception of phonetic segments (consonants and vowels).
1.3 STATEMENT OF THE PROBLEM
The challenge that led to this project work is that the blind find it difficult to know exactly which words they are typing; even though they may know the keyboard very well, they can only assume that what they have typed is correct. At the end of the day, they find themselves making many mistakes in their typed work. This led to the development of this project, the Text-to-Speech Application.
1.4 OBJECTIVE OF THE STUDY
The main objective of this project is to create an application that converts text to speech so that visually impaired students know exactly what they are typing and presenting on the computer system. Visually impaired students will be assured of what they are typing and will know how to correct their mistakes if there is any typographical error in their work.
1.5 SCOPE OF THE STUDY
The scope of this research work covers the conversion of text into spoken words by analyzing and processing the text using Natural Language Processing (NLP) and then using Digital Signal Processing (DSP) technology to convert the processed text into a synthesized speech representation.
1.6 SIGNIFICANCE OF THE STUDY
The significance of this project work is that it serves as a helping tool for visually impaired students by providing a text-to-speech synthesis application. Blind students will use the software to voice out what they have typed.
1.7 LIMITATION OF THE STUDY
The limitations encountered in this research work include: