Dr Y.L. Chan
Dr Chan is actively involved in professional activities. In particular, Dr Chan serves as an Associate Editor of IEEE Trans. on Image Processing, a premier journal in the field of image and video processing. He was the Secretary of the 2010 IEEE Int. Conference on Image Processing (ICIP 2010), the IEEE-sponsored flagship conference in the image processing area. He was also the Publications Chair of the IEEE Int. Conference on Multimedia and Expo (ICME 2017) and the Technical Program Co-Chair of the 2014 Int. Conference on Digital Signal Processing (DSP 2014), among others. Many of his research papers appear in top-tier international journals, such as IEEE Trans. on Image Processing, IEEE Trans. on Circuits and Systems for Video Technology and IEEE Trans. on Multimedia. His research interests include multimedia technologies, image and video compression, video transcoding, digital TV/HDTV, 3DTV/3DV, multiview-plus-depth coding, machine learning for video coding, and future video coding standards including Versatile Video Coding (VVC), screen content coding, light-field video coding, and 360-degree omnidirectional video coding.
Research highlights:
The current research focus of Dr Chan is content-aware video coding. Dr Chan and his group have witnessed the rapid development of video coding technologies over the past decade. Recently, many new types of video have attracted both academic and industrial researchers to new application domains, including screen content video, light-field video and 360-degree video. The group's present and future compression work is geared towards these different directions, as shown in the figure below.

The state-of-the-art video coding standards treat every video equally; they are content unaware. To handle a wide variety of video characteristics, different coding units (CUs) within the same frame may be coded with different modes and partitioned into different sub-blocks, which results in a huge number of candidate modes for handling different types of video. The group believes that each type of video has its own characteristics. The idea is to adaptively exploit these characteristics through machine learning so that the coding framework can make very fast decisions with negligible quality loss; in other words, the coding framework becomes content aware.

Recently, the group successfully designed an online-learning-based Bayesian decision rule for fast intra mode decision and CU partitioning in HEVC screen content coding (SCC). Its Bjøntegaard delta bit rate (BDBR) and complexity performance are among the best reported in the literature. The group then applied decision trees, random forests and deep learning to SCC mode decision, yielding further BDBR improvement. These works are among the first to use learning approaches for coding the new screen content video. The group can further extend this concept to other new video formats such as light-field and 360-degree video, where, to the group's knowledge, no learning-based coding work has yet been reported.
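To illustrate the flavour of an online-learning-based Bayesian decision rule for a fast split/no-split CU decision, the sketch below keeps per-class running statistics of a scalar block feature (here, block variance, chosen purely for illustration) and picks the class with the larger posterior. The class names, feature choice and Gaussian likelihood model are illustrative assumptions, not the group's actual algorithm.

```python
# Illustrative sketch only: a two-class online Bayesian decision rule
# in the spirit of fast CU split/no-split decisions. The feature
# (block variance) and Gaussian likelihoods are assumptions for the demo.
import math

class OnlineGaussian:
    """Running mean/variance (Welford's method) for one class's likelihood."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    def pdf(self, x):
        var = self.m2 / (self.n - 1) if self.n > 1 else 1.0
        var = max(var, 1e-6)  # guard against degenerate variance
        return math.exp(-(x - self.mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

class BayesSplitDecider:
    """Decide 'split' vs 'no_split' for a CU from one scalar feature."""
    def __init__(self):
        self.models = {"split": OnlineGaussian(), "no_split": OnlineGaussian()}
        self.counts = {"split": 0, "no_split": 0}

    def observe(self, label, feature):
        # Online learning: update class statistics from full-RDO ground truth.
        self.models[label].update(feature)
        self.counts[label] += 1

    def decide(self, feature):
        # Posterior is proportional to prior times likelihood; take the argmax.
        total = sum(self.counts.values())
        scores = {c: (self.counts[c] / total) * self.models[c].pdf(feature)
                  for c in self.models}
        return max(scores, key=scores.get)

# Demo with synthetic data: high-variance blocks tend to be split.
decider = BayesSplitDecider()
for v in (10, 12, 11, 9):
    decider.observe("no_split", v)
for v in (80, 90, 85, 95):
    decider.observe("split", v)
print(decider.decide(100))  # a high-variance block
print(decider.decide(8))    # a low-variance block
```

In a real encoder the rule would be trained online from the decisions of occasional full rate-distortion-optimisation passes, so that the statistics track the content; the fast rule then skips the exhaustive mode search for most CUs.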
Dr Chan and his group believe that their current research is heading in a direction where fruitful results can be achieved in the development of next-generation video coding.