A Virtual Sound Source Localization System Based on Parametric Coding

In this era when 3D virtual-reality multimedia applications are widespread, most applications still center on 3D video, yet adding 3D audio gives users a much richer sense of immersion in the environment. This thesis therefore proposes an object-based audio rendering system (OARS) for 3D applications such as first-person shooter (FPS) games, allowing users to identify the positions of static sound sources and to follow the movement of dynamic ones.

Since these applications may need to run over a network, reducing the bitrate is an important consideration. The system architecture is therefore divided into an analysis part and a synthesis part for the object audio. On the analysis side, to avoid a large increase in bitrate, a parametric coding technique and a parameter generator produce spatial parameters such as the time and intensity differences between objects or between channels. On the synthesis side, these spatial parameters are used to synthesize the object audio into multi-channel signals that carry the spatial information.
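As a concrete illustration of the analysis side, the sketch below (Python; the function name, signal layout, and 48 kHz sample rate are assumptions for illustration, not the thesis implementation) estimates a time difference by cross-correlation and an intensity difference from the energy ratio of two channels:

```python
import numpy as np

def extract_spatial_parameters(ref, other, sample_rate=48000):
    """Estimate the time difference (ms) and intensity difference (dB)
    of `other` relative to `ref` for one analysis frame."""
    # Time difference: lag that maximizes the cross-correlation.
    corr = np.correlate(other, ref, mode="full")
    lag = int(np.argmax(corr)) - (len(ref) - 1)
    time_diff_ms = 1000.0 * lag / sample_rate

    # Intensity difference: frame energy ratio expressed in dB.
    eps = 1e-12
    level_diff_db = 10.0 * np.log10(
        (np.sum(np.square(other)) + eps) / (np.sum(np.square(ref)) + eps)
    )
    return time_diff_ms, level_diff_db
```

Transmitting only a few such parameters per object, rather than additional full audio channels, is what keeps the bitrate low while preserving the spatial information.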

We examine the spectral changes of the audio processed by the system and evaluate its surround effect with a modified ITU-R seven-grade (+3 to -3) subjective audio quality scale: the static audio tests score about +1.49 on average, and the tests of directional movement for dynamic audio score about +1.31 on average.

 

An Object-based Audio Rendering System Based on Parametric Coding

ABSTRACT

Nowadays, multimedia applications of 3D virtual reality are increasingly popular. Although most of these applications focus on 3D video, combining 3D video with 3D audio processing can enrich the user experience. In this thesis, we propose an object-based audio rendering system (OARS) for 3D applications such as first-person shooter (FPS) games. With the proposed system, users are able to locate the audio objects, whether they are static or in motion.

Since in many applications the audio objects may reside at remote sites connected over the Internet, bitrate reduction remains critical. In this work, the system consists of an audio analysis part and an audio synthesis part. In the analysis part, we utilize a parametric coding technique to generate spatial parameters, including the time and intensity differences between an object and the loudspeaker channels, to reduce the bitrate while preserving the spatial information. In the synthesis part, we reconstruct the multi-channel audio outputs by combining an audio signal with the spatial parameters.
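To make the synthesis part concrete, the following sketch (Python; `render_object` and its parameter layout are illustrative assumptions rather than the system's actual interface) distributes one object's audio signal to the output channels by applying per-channel time and intensity differences:

```python
import numpy as np

def render_object(mono, channel_params, sample_rate=48000):
    """Spread a mono object signal over the loudspeaker channels using
    per-channel (time_diff_ms, level_diff_db) spatial parameters."""
    outputs = np.zeros((len(channel_params), len(mono)))
    for ch, (time_diff_ms, level_diff_db) in enumerate(channel_params):
        delay = int(round(time_diff_ms * sample_rate / 1000.0))
        gain = 10.0 ** (level_diff_db / 20.0)
        delayed = np.roll(mono, delay)
        if delay > 0:          # zero the samples wrapped around by np.roll
            delayed[:delay] = 0.0
        elif delay < 0:
            delayed[delay:] = 0.0
        outputs[ch] = gain * delayed
    return outputs
```

Summing the rendered outputs of all active objects per channel would then yield the final multi-channel mix.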

We evaluate the system performance by analyzing the spectra of the processed audio signals and by conducting subjective listening tests. Based on a modified ITU-R seven-grade (-3 to +3) subjective quality evaluation, the proposed system scores about +1.49 on average for static audio objects and about +1.31 for moving objects.
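For completeness, the grading itself reduces to averaging listener scores on the seven-grade comparison scale; a minimal sketch (Python, with hypothetical grades) is:

```python
import numpy as np

def mean_grade(scores):
    """Average listener grades given on the -3..+3 comparison scale."""
    scores = np.asarray(scores, dtype=float)
    if np.any((scores < -3) | (scores > 3)):
        raise ValueError("grades must lie in [-3, +3]")
    return float(scores.mean())

# Hypothetical example: mean_grade([2, 1, 2, 1, 1, 2, 1]) -> ~1.43
```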