严勤 (Qin Yan)
  • Affiliation: Hohai University, College of Computer and Information
  • Title: Professor

Research Areas

Signal and Information Processing

Image Processing and Analysis

Machine Learning

Publications

[1] Luan Dong, Qin Yan, Fengling Zheng. Robust graticule intersection localization for rotated topographic maps. Machine Vision and Applications, 2019, 30: 737-747.

[2] Luan Dong, Fengling Zheng, Hongxia Chang, Qin Yan. Corner points localization in electronic topographic maps with deep neural networks. Earth Science Informatics, 2017.

[3] Luan Dong, Qin Yan, Yong Lv, Shuyu Deng. Full band watermarking in DCT domain with Weibull model. Multimedia Tools & Applications, 2016, 76(2): 1982-2000.

[4] Qin Yan, Saeed Vaseghi. Modeling and Synthesis English Accents with Pitch and Duration Correlates. Computer Speech and Language, 2010, 24(4): 711-725.

[5] Qin Yan, Cong Ding, Jingjing Yin, Yong Lv. Improving music auto-tagging with trigger-based context model. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2015), Apr. 19-24, 2015, Brisbane, Australia.

[6] Qin Yan, Saeed Vaseghi, Esfandiar Zavarehei, Ben Milner, Jonathan Darch, Paul White, Ioannis Andrianakis. Kalman Tracking of Linear Predictor and Harmonic Noise Models for Noisy Speech Enhancement. Computer Speech and Language, 2008.

[7] Qin Yan, Saeed Vaseghi, Esfandiar Zavarehei, Ben Milner, Jonathan Darch, Paul White, Ioannis Andrianakis. Formant-Tracking Linear Prediction Model Using HMMs and Kalman Filters for Noisy Speech Processing. Computer Speech and Language, 2007, 21(3): 543-561.

[8] E. Zavarehei, S. Vaseghi, Q. Yan. Noisy Speech Enhancement Using Harmonic-Noise Model and Codebook-Based Post-Processing. IEEE Transactions on Speech and Audio Processing, 2007, 15(4).

[9] Qin Yan, Saeed Vaseghi, Dimitrios Rentzos, Ching-Hsiang Ho. Analysis, Modelling and Synthesis of Formant Spaces of British, Australian and American English Accents. IEEE Transactions on Speech and Audio Processing, 2007, 15(2): 676-689.

[10] Esfandiar Zavarehei, Saeed Vaseghi, Qin Yan. Inter-frame modeling of DFT trajectories of speech and noise for speech enhancement using Kalman filter. Speech Communication, 2006, 48: 1545-1555.

Honors and Awards

[1] Project leader of the ITU-T universal sound activity detection (VAD) project

[2] Project leader of the mobile communication coding sub-project of the China national mobile audio-visual (AVS-M) coding standard project

[3] Project leader of integrating noise and unified speech models for communication on mobile devices in noisy environments (The Engineering and Physical Sciences Research Council, UK)

[4] Project leader of modelling voice, accent and emotion for speaker-adaptive text-to-speech synthesis (The Engineering and Physical Sciences Research Council, UK)

[5] Research director of projects sponsored by the National Natural Science Foundation of China (NSFC)