Impact of emotions on fundamental speech signal frequency


Publisher

WSEAS Press

Location

Not in the ÚK (Central Library) holdings

Abstract

The paper deals with the recognition of speech produced in a particular emotional state and examines the impact of a speaker's emotional state on the fundamental frequency of the speech signal. The vocal cords create audio signals that carry information encoded in human language; this process is human speech. From a speech signal, several speaker attributes can be determined, such as sex, age, speech disorders (stuttering or cluttering), and emotional state. As for emotions, only about 10% of a speaker's emotional state or state of mind is expressed through speech. For that reason, the selection and computation of suitable parameters is an important part of any system designed to determine emotions from speech signals; these parameters should be as relevant as possible to the speaker's emotional state. The fundamental frequency is one such parameter. We examined a method for extracting the fundamental frequency of the speech signal by means of center clipping and its use in a system classifying the speaker's emotional state.
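The center-clipping pitch extraction the abstract refers to can be sketched as follows. This is an illustrative reconstruction in Python, not the paper's implementation: the frame length, clipping ratio, autocorrelation step, and F0 search range are all assumptions.

```python
import numpy as np

def center_clip(frame, clip_ratio=0.3):
    """Center-clip a speech frame: samples within +/- threshold are zeroed,
    the rest are shifted toward zero. This suppresses formant structure so
    the autocorrelation peak at the pitch period stands out."""
    threshold = clip_ratio * np.max(np.abs(frame))
    clipped = np.zeros_like(frame)
    clipped[frame > threshold] = frame[frame > threshold] - threshold
    clipped[frame < -threshold] = frame[frame < -threshold] + threshold
    return clipped

def estimate_f0(frame, fs, f0_min=60.0, f0_max=400.0):
    """Estimate the fundamental frequency of a voiced frame from the
    autocorrelation of the center-clipped signal."""
    clipped = center_clip(frame)
    # One-sided autocorrelation (lags 0 .. len-1).
    ac = np.correlate(clipped, clipped, mode="full")[len(clipped) - 1:]
    # Restrict the peak search to plausible pitch periods.
    lag_min = int(fs / f0_max)
    lag_max = int(fs / f0_min)
    peak_lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    return fs / peak_lag

# Synthetic voiced frame: 120 Hz fundamental plus two weaker harmonics.
fs = 8000
t = np.arange(int(0.04 * fs)) / fs   # 40 ms frame
frame = (np.sin(2 * np.pi * 120 * t)
         + 0.4 * np.sin(2 * np.pi * 240 * t)
         + 0.2 * np.sin(2 * np.pi * 360 * t))
print(estimate_f0(frame, fs))
```

On this synthetic frame the estimate lands near the 120 Hz fundamental rather than on a harmonic, which is the point of the clipping step: without it, strong formant-shaped harmonics can dominate the autocorrelation.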

Subject(s)

fundamental frequency, emotions, pitch extraction, energy, Zero Crossing Rate, ANOVA

Citation

ANDERSON, D., Hung-Jen Yang and Pavel Varacha. Latest trends in information technology: proceedings of the 1st WSEAS International Conference on Information Technology and Computer Networks (ITCN '12): proceedings of the 1st WSEAS International Conference on Cloud Computing (CLC '12): proceedings of the 1st WSEAS International Conference on Programming Languages and Compilers (PRLC '12). [Austria]: WSEAS Press, c2012, pp. 409-414. Recent Advances in Computer Engineering Series, 7.
