Quality Estimation for H.264/SVC Spatial Scalability Based on a New Quantization Distortion Model

ABSTRACT

Scalable Video Coding (SVC) provides efficient compression for video bitstreams that support various scalability configurations. The H.264 scalable extension
(H.264/SVC) is the most recent scalable coding standard. It employs state-of-the-art inter-layer prediction to achieve higher coding efficiency than previous standards. Moreover, the required video quality usually differs across situations, such as varying link conditions or video content. Efficiently providing suitable video quality to users under these different situations is therefore an important issue. This
work proposes a Quantization-Distortion (Q-D) model for H.264/SVC spatial
scalability to estimate video quality before actual encoding is performed. We introduce a residual decomposition for the three inter-layer prediction types: residual prediction, intra prediction, and motion prediction. The residual is decomposed into the previous distortion and a prior-residual, both of which can be estimated before encoding. In the single-layer case, these correspond to the distortion of the previous frame and the difference between two original frames. The distortion can then be
modeled as a function of the quantization step and the prior-residual. In simulations, the proposed model can estimate the actual Q-D curves for each inter-layer prediction type, and its accuracy reaches up to 94.98%.
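
The abstract does not give the model's closed form, so the following minimal Python sketch only illustrates the underlying quantization-distortion relationship it refers to: it measures the mean squared error introduced by uniform quantization of a synthetic residual signal at several quantization steps. The function name quantization_distortion, the Laplacian stand-in for the prior-residual, and the uniform quantizer are illustrative assumptions, not the model proposed in the paper.

import numpy as np

def quantization_distortion(prior_residual: np.ndarray, q_step: float) -> float:
    # Distortion (MSE) introduced by a uniform (mid-tread) quantizer with step q_step.
    reconstructed = q_step * np.round(prior_residual / q_step)
    return float(np.mean((prior_residual - reconstructed) ** 2))

# Illustrative usage: a Laplacian-distributed stand-in for the prior-residual,
# with distortion measured over a range of quantization steps (a sample Q-D curve).
rng = np.random.default_rng(0)
residual = rng.laplace(scale=4.0, size=10_000)
for q in (4, 8, 16, 32):
    print(f"Q-step {q:>2}: estimated distortion {quantization_distortion(residual, q):.2f}")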