Visual Quality Assessment Group @ NTU

Maintained by: Teo Wu (Haoning Wu)

🌟New: We have published a benchmark for multi-modality large language models (MLLMs) on low-level vision and visual quality assessment!


See our 🖥️ codebase and 📑 paper!
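As a toy illustration of what evaluating an MLLM on such a benchmark involves, the sketch below scores a single multiple-choice low-level-vision question. `ask_mllm` is a hypothetical stand-in for any image-plus-text chat model, and the prompt format is an assumption; the actual evaluation protocol and prompts are defined in the Q-Bench codebase.

```python
# Toy sketch of scoring one multiple-choice low-level-vision question.
# `ask_mllm(image_path, prompt) -> str` is a hypothetical stand-in for any MLLM;
# the real Q-Bench protocol and prompts live in the codebase linked above.
from typing import Callable, List

def score_mcq(ask_mllm: Callable[[str, str], str], image_path: str,
              question: str, options: List[str], answer: str) -> bool:
    letters = [chr(ord("A") + i) for i in range(len(options))]
    prompt = (
        question + "\n"
        + "\n".join(f"{l}. {opt}" for l, opt in zip(letters, options))
        + "\nAnswer with the option letter only."
    )
    reply = ask_mllm(image_path, prompt).strip().upper()
    return reply.startswith(letters[options.index(answer)])
```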

We are a young research team from Nanyang Technological University (NTU) and SenseTime Research. We aim to build efficient and explainable Image and Video Quality Assessment approaches and to explore the perceptual mechanisms behind human quality perception.

Selected Research Projects

Code repositories for our works under these projects:

ExplainableVQA


MaxVQA and MaxWell database (ACM MM, 2023) Paper
  • An Extended 16-dimensional Video Quality Assessment Database
  • Query Multi-Axis, Dimension-Specific Quality via Language (CLIP); see the sketch after this list
  • Try our demo on GitHub
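The sketch below illustrates the idea of querying one quality dimension with CLIP text anchors, in the spirit of MaxVQA. It assumes the openai/CLIP package is installed; the prompt pair, model variant, and scoring rule are illustrative assumptions, not the ones used in the paper, so please refer to the repository for the actual model and prompts.

```python
# Illustrative sketch: score a single quality dimension by comparing an image
# against a positive/negative text anchor pair with CLIP. The prompts below are
# examples only, not the prompts used by MaxVQA.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def dimension_score(image_path: str, positive: str, negative: str) -> float:
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    texts = clip.tokenize([positive, negative]).to(device)
    with torch.no_grad():
        image_feat = model.encode_image(image)
        text_feat = model.encode_text(texts)
        image_feat /= image_feat.norm(dim=-1, keepdim=True)
        text_feat /= text_feat.norm(dim=-1, keepdim=True)
        probs = (100.0 * image_feat @ text_feat.T).softmax(dim=-1)
    return probs[0, 0].item()  # probability mass on the positive anchor

# e.g. dimension_score("frame.png", "a sharp, clear photo", "a blurry, unclear photo")
```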

DOVER (Aesthetic VQA)


DOVER (ICCV, 2023) Paper
  • State-of-the-art Method for No-reference Video Quality Assessment (see the sketch after this list)
  • First Multi-dimensional Dataset for No-reference Video Quality Assessment
  • See our repository on GitHub
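As a rough illustration of DOVER's disentangled view of quality, the sketch below fuses a score from an aesthetic branch (whole, downsampled frames) with one from a technical branch (local fragments). The branch modules and the fusion weight are placeholders, not the actual DOVER architecture; see the repository for the real implementation.

```python
# Minimal sketch of the two-perspective idea: an aesthetic branch and a technical
# branch each produce a score, and the two are fused into an overall quality score.
# Branch models and the fusion weight are placeholders.
import torch
import torch.nn as nn

class TwoPerspectiveVQA(nn.Module):
    def __init__(self, aesthetic_branch: nn.Module, technical_branch: nn.Module,
                 aesthetic_weight: float = 0.5):
        super().__init__()
        self.aesthetic_branch = aesthetic_branch   # sees downsampled whole frames
        self.technical_branch = technical_branch   # sees cropped local fragments
        self.w = aesthetic_weight

    def forward(self, aesthetic_view: torch.Tensor, technical_view: torch.Tensor) -> torch.Tensor:
        a = self.aesthetic_branch(aesthetic_view)   # (batch,) aesthetic score
        t = self.technical_branch(technical_view)   # (batch,) technical score
        return self.w * a + (1.0 - self.w) * t      # fused overall quality
```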

FAST-VQA (end-to-end!)


FAST-VQA / FasterVQA (ECCV, 2022 / TPAMI, 2023)
  • The first end-to-end Video Quality Assessment method family (see the sketch after this list)!
  • Super efficient: real-time even on an Apple M1 CPU!
  • See our weights/code/toolbox on GitHub
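The snippet below is a minimal sketch of grid-based fragment sampling, the kind of input construction FAST-VQA's efficiency builds on: native-resolution mini-patches are taken from a uniform grid over a frame and spliced into one small input, so an end-to-end network can see local quality-sensitive texture without processing the full frame. The grid and patch sizes are example values, and this is not the code from the repository.

```python
# Illustrative sketch of grid-based fragment sampling: one native-resolution patch
# per grid cell, spliced into a compact (grid*patch)^2 input. Sizes are examples.
import torch

def sample_fragments(frame: torch.Tensor, grid: int = 7, patch: int = 32) -> torch.Tensor:
    """frame: (C, H, W) -> spliced fragments: (C, grid * patch, grid * patch)."""
    c, h, w = frame.shape
    assert h >= grid * patch and w >= grid * patch, "frame too small for this grid/patch"
    cell_h, cell_w = h // grid, w // grid
    out = torch.empty(c, grid * patch, grid * patch, dtype=frame.dtype)
    for i in range(grid):
        for j in range(grid):
            # random top-left corner inside each grid cell, kept at native resolution
            y0 = i * cell_h + torch.randint(0, max(cell_h - patch, 1), (1,)).item()
            x0 = j * cell_w + torch.randint(0, max(cell_w - patch, 1), (1,)).item()
            out[:, i*patch:(i+1)*patch, j*patch:(j+1)*patch] = frame[:, y0:y0+patch, x0:x0+patch]
    return out
```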

Contact

Contact Teo Wu (Haoning Wu, haoning001@e.ntu.edu.sg or realtimothyhwu@gmail.com) for inquiries related to code and datasets.