
ExplainableVQA/demo_maxvqa.py #9

Open
cyy-1234 opened this issue May 13, 2024 · 0 comments

Comments
Hi, contributor,
I recently read the article Towards Explainable In-the-Wild Video Quality Assessment: A Database and a Language-Prompted Approach. Please let me know. I tried a video of my own, and the score felt like it was between 0-100. The values ​​in the paper then correspond to the paper's "Figure 4: Qualitative studies on different specific factors, with a good video (>0.6) and a bad video (<-0.6) in each dimension of Maxwell; [A-5] Trajectory, [ T-5]Flicker, and [T-8] Fluency are focusing on temporal variations and example videos for them are appended in supplementary package. Zoom in for details.", in my example what counts as good and what counts as bad Yes, looking forward to your reply

(Two screenshots of the demo output attached.)
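For what it's worth, if the demo really does report scores on a 0-100 scale, one way to compare them against the paper's Figure 4 convention (good > 0.6, bad < -0.6 on a [-1, 1] scale) is a linear rescale. This is only a sketch under that assumption; `rescale_to_unit` and `label` are hypothetical helpers, not functions from the repository:

```python
# Hypothetical sketch: map a 0-100 demo score onto the paper's [-1, 1]
# convention, then apply Figure 4's thresholds (> 0.6 good, < -0.6 bad).
# The 0-100 input range is an assumption, not confirmed by the authors.

def rescale_to_unit(score: float, lo: float = 0.0, hi: float = 100.0) -> float:
    """Linearly map a score from [lo, hi] to [-1, 1]."""
    return 2.0 * (score - lo) / (hi - lo) - 1.0

def label(unit_score: float, thresh: float = 0.6) -> str:
    """Apply the Figure 4 thresholds to a [-1, 1] score."""
    if unit_score > thresh:
        return "good"
    if unit_score < -thresh:
        return "bad"
    return "neutral"

if __name__ == "__main__":
    for raw in (85.0, 50.0, 10.0):
        u = rescale_to_unit(raw)
        print(f"raw={raw:5.1f} -> unit={u:+.2f} ({label(u)})")
```

Under this rescaling, 85/100 would map to +0.7 ("good") and 10/100 to -0.8 ("bad"), but I'd appreciate confirmation of what scale the demo actually uses.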
