Date & Time: March 29, 2004, 2:30-4:00 p.m.
Venue: Room 217, Chung Moon Soul Building, KAIST
Topic: Learning and Analyzing Auditory Scenes with Probabilistic Graphical Models
Speaker: Te-Won Lee, Institute for Neural Computation, University of California, San Diego
The problem of analyzing sound signals captured in auditory scenes has attracted much attention from many different viewpoints. Auditory scenes contain multiple sources that often emit sounds simultaneously. Sensors placed in the scene capture the sounds as they propagate away from the sources. This talk focuses on algorithms for processing the sensor signals to extract specific information about the auditory scene, the sound sources it contains, and the signals they emit. Our approach is based on the framework of probabilistic graphical models, developed in the field of machine learning, and leverages learning and reasoning with those models. First, we present machine learning algorithms using graphical models for speech signal representation. Learning efficient codes for speech signals in a linear generative model allows us to analyze important speech features and their characteristics, in order to model different sounds, individual speaker characteristics, or classes of speakers. We then use this principle to derive a method for solving the difficult problem of separating multiple sources given only a single-channel microphone recording. Multi-channel observations can relax some of the constraints in blind source separation; however, the problem then includes reverberation, sensor noise, and other challenges of real environments. We demonstrate solutions that can separate speech signals from mixture recordings. Finally, we present ideas on how to extend the source separation methods to auditory scene analysis within the graphical model framework, and discuss the computational challenges and approximate solutions that exist.
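The linear generative model and blind source separation mentioned in the abstract can be illustrated with a small self-contained sketch. This is not the speaker's graphical-model formulation; it is a minimal FastICA-style demonstration on synthetic signals (a sinusoid and a sawtooth standing in for speech), with an assumed instantaneous mixing matrix, to show the x = As model and its unmixing.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
t = np.linspace(0, 8, n)

# Two independent, non-Gaussian sources (stand-ins for speech signals).
s1 = np.sin(2 * np.pi * 1.3 * t)              # sinusoid
s2 = 2 * (t * 2.7 - np.floor(t * 2.7 + 0.5))  # sawtooth
S = np.vstack([s1, s2])

# Instantaneous linear mixing x = A s: the linear generative model.
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S

# Center and whiten the observations.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = (E / np.sqrt(d)) @ E.T @ X                # Z @ Z.T / n is close to I

# FastICA with deflation, tanh contrast function.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        u = np.tanh(w @ Z)
        w = (Z * u).mean(axis=1) - (1 - u ** 2).mean() * w
        w -= W[:i].T @ (W[:i] @ w)            # keep orthogonal to earlier rows
        w /= np.linalg.norm(w)
    W[i] = w

S_hat = W @ Z                                  # recovered sources (up to sign/order)
```

Note that ICA recovers the sources only up to permutation and sign, which is why any comparison against the true sources must match components by correlation. Real auditory scenes add reverberation (convolutive mixing) and sensor noise, which this instantaneous model deliberately ignores.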
2004 World Brain Awareness Week
- Dates: March 15 (Mon) - 21 (Sun), 2004
- Details: 2004 World Brain Awareness Week invitation
The Brain Science Research Center is hosting the following seminar.
All who are interested are warmly invited to attend.
Date & Time: January 28, 2004 (Wed), 4:00-5:30 p.m.
Venue: Room 217, Chung Moon Soul Building, KAIST
Title: Cue-guided Search: A Computational Model of Selective Attention
Speaker: KangWoo Lee (Department of Informatics, School of Science and Technology, University of Sussex, Falmer, UK)
Selective visual attention in a natural environment can be seen as the interaction between the external visual stimulus and task-specific knowledge of the required behavior. This interaction between the bottom-up stimulus and the top-down, task-related knowledge is crucial for what is selected in space and time within the scene. In this paper we propose a computational model of selective attention for a visual search task. We go beyond simple saliency-based attention models to model selective attention guided by top-down visual cues, which are dynamically integrated with the bottom-up information. In this way, selection of a location is accomplished through the interaction of bottom-up and top-down information.
First, the general structure of our model is briefly introduced, followed by a description of the top-down processing of task-relevant cues. This is then followed by a description of the processing of the external images into three feature maps, which are combined into an overall bottom-up map. Second, the formalism for our novel Interactive Spiking Neural Network (ISNN) is developed, with the interactive activation rule that computes the integration map. The learning rules for both the bottom-up and top-down weight parameters are given, together with some further analysis of the properties of the resulting ISNN. Third, the model is applied to a face detection task: searching for the location of a specific face that is cued. The results show that the trajectories of attention are dramatically changed by the interaction of information and by variations of the cues, giving an appropriate, task-relevant search pattern. Finally, we discuss ways in which these results can be seen as compatible with existing psychological evidence.
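The combination of feature maps into a bottom-up map, modulated by a top-down cue map, can be caricatured in a few lines. This sketch does not reproduce the ISNN or its learning rules; the feature maps are random placeholders, the cued location is invented, and a simple multiplicative interaction stands in for the interactive activation rule.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W = 16, 16

def normalize(m):
    """Rescale a map to [0, 1] so different features are comparable."""
    m = m - m.min()
    peak = m.max()
    return m / peak if peak > 0 else m

# Three hypothetical bottom-up feature maps (e.g. intensity, colour, orientation).
features = [normalize(rng.random((H, W))) for _ in range(3)]
bottom_up = normalize(sum(features))          # overall bottom-up map

# Top-down cue map: task knowledge favouring the cued location (4, 11).
yy, xx = np.mgrid[0:H, 0:W]
top_down = np.exp(-((yy - 4) ** 2 + (xx - 11) ** 2) / (2 * 1.5 ** 2))

# Multiplicative interaction of the two streams; a crude stand-in for the
# ISNN's interactive activation rule on the integration map.
integration = normalize(bottom_up * top_down)

# The attended location is where the integrated map peaks.
attended = np.unravel_index(np.argmax(integration), integration.shape)
```

Changing the cue map shifts the peak of the integration map, and hence the attended location, which is the qualitative behavior the abstract describes: the same bottom-up scene yields different attention trajectories under different task-relevant cues.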
Copyright (c) Brain Science Research Center. All rights reserved.