AP19678995 – Development of a speaker recognition method using deep neural networks with ultra-short duration of pure speech

Objective of the project:

The aim of the project is to investigate the possibility of implementing and training deep neural networks to identify speakers from ultra-short phrases, in situations where standard statistical methods do not work.

Relevance:

The proposed project investigates the effectiveness of deep neural networks for voice identification systems based on ultra-short phrases, whose duration of pure speech does not exceed a few seconds. The relevance of this research stems from the fact that the speaker recognition methods in use today focus mainly on building a statistical model of the speaker's voice, employing Gaussian mixture models, i-vectors, and similar techniques. In practice, however, it is often necessary to identify a person from short phrases, and it is virtually impossible to construct a statistical digital voice model from an ultra-short utterance. We therefore face the problem of creating a speaker voice model that does not require long utterances (more than 15 seconds of pure speech). Based on this, we set the task of researching and developing algorithms for constructing a human voice model in conditions where traditional statistical methods are not applicable.
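As a rough illustration of the embedding-based alternative to statistical voice models (a minimal sketch; the function names, dimensions, and toy data below are illustrative assumptions, not the project's actual design), a deep neural network maps an utterance of any duration to a fixed-size speaker embedding, and identification then reduces to nearest-neighbour search over enrolled embeddings under cosine similarity:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, enrolled):
    # Return the enrolled speaker whose embedding is closest
    # to the probe embedding under cosine similarity.
    return max(enrolled, key=lambda spk: cosine_similarity(probe, enrolled[spk]))

# Toy enrolled embeddings; in a real system these would be DNN
# outputs (e.g. 512-dimensional vectors), one per enrolled speaker.
rng = np.random.default_rng(0)
enrolled = {"alice": rng.normal(size=8), "bob": rng.normal(size=8)}

# A probe near Alice's embedding, simulating a short utterance of hers.
probe = enrolled["alice"] + 0.05 * rng.normal(size=8)
print(identify(probe, enrolled))  # prints "alice"
```

Because the embedding has a fixed size regardless of utterance length, this scheme does not require the long recordings that a per-speaker statistical model needs, which is precisely the property that makes such approaches attractive for ultra-short phrases.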

Scientific adviser: Akhmediyarova Ainur Tanatarovna, PhD, Professor

Quantitative and qualitative composition of project performers: 8 performers, including 5 PhD holders and 1 Master's degree holder

The results obtained: the project started in 2023.

Term of realization: 2023-2025.
