Seminar announcement: Perceptual Modeling and Processing of Multimedia Signals
There are many motivations for enabling machines to perceive multimedia signals (vision, audio, smell, touch, and their combinations) as humans do. Firstly, most multimedia signals we manipulate are intended for human consumption. Secondly, human perception of signals is both effective and efficient, so machines that emulate human perceptual functioning gain technical advantages. Thirdly, there is an increasing need for harmonious human-machine interaction (e.g., in smart cities), so it is desirable to build machines that perceive and think as we do. So far, we have been able to build machines that far surpass our arms and legs in strength and speed. However, when it comes to modeling human perception, the odyssey has proven much more difficult.
In this talk, the related multi-disciplinary problems in perceptual signal processing will be introduced. Basic computational models (e.g., visual attention, just-noticeable difference, and perceptual signal quality metrics) will be discussed. Afterward, different perceptually inspired signal processing techniques will be presented for signal acquisition, enhancement, communication, retrieval/search, adaptation, and understanding. Much of the material will be drawn from substantial experience in relevant academic and industrial projects. The last part of the talk discusses future research and development possibilities, including those enabled by emerging big data and cloud media. Up to now, most work has been carried out on visual signals (image, video, graphics, animation, screen content, etc.) and in some audio cases; more research is expected in other media as well (e.g., smell, touch, and even taste, and, more importantly, truly multimodal media).