Speech Emotion Recognition with an Embedded Attention Mechanism and Hierarchical Context (嵌入注意力机制并结合层级上下文的语音情感识别).
Speech emotion recognition remains a challenging task due to issues such as emotional corpus construction, the association between emotion and acoustic features, and speech emotion recognition modeling. Conventional context-based speech emotion recognition systems are limited to the feature layer, and therefore risk losing context details at the label layer and neglecting the differences between the two levels. This paper proposes a Bidirectional Long Short-Term Memory (BLSTM) network model with an embedded attention mechanism combined with hierarchical context learning. The model completes the speech emotion recognition task in three phases. The first phase extracts the full feature set from the emotional speech, then applies the SVM-RFE feature-ranking algorithm for dimensionality reduction to obtain the optimal feature subset, to which attention weights are assigned. In the second phase, the weighted feature subset is fed into a BLSTM network that learns feature-layer context to produce an initial emotion prediction. The third phase uses the emotion label values to train a separate, independent BLSTM network that learns label-layer context information; based on this information, the final prediction is made from the output of the second phase. The embedded attention mechanism lets the model automatically learn to adjust its attention over the input feature subset, and the introduced label-layer context is fused with the feature-layer context to achieve hierarchical context information fusion, improving robustness and the model's ability to model emotional speech. Experimental results on the SEMAINE and RECOLA datasets show that both RMSE and CCC improve significantly over the baseline models. [ABSTRACT FROM AUTHOR]
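The first phase described in the abstract — ranking features and applying attention weights to the selected subset — can be sketched as follows. This is a minimal illustration, not the authors' implementation: in the actual model the attention weights are learned jointly with the BLSTM, whereas here a softmax over fixed relevance scores (e.g. derived from an SVM-RFE ranking) stands in for them, and all function names are hypothetical.

```python
import math

def attention_weights(relevance):
    """Softmax over per-feature relevance scores.

    `relevance` stands in for scores derived from an SVM-RFE ranking;
    in the paper's model, these weights are learned, not fixed.
    """
    m = max(relevance)  # subtract the max for numerical stability
    exps = [math.exp(r - m) for r in relevance]
    total = sum(exps)
    return [e / total for e in exps]

def weight_features(frame, relevance):
    """Scale each acoustic feature of one frame by its attention weight."""
    return [x * w for x, w in zip(frame, attention_weights(relevance))]

# Toy example: three acoustic features, the last ranked most relevant.
frame = [0.5, 1.0, 2.0]
relevance = [0.1, 0.3, 0.9]
weighted = weight_features(frame, relevance)
```

The weighted frames would then be passed, per phase two, to a feature-layer BLSTM, with a second BLSTM trained on label values supplying the label-layer context used in phase three.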