Large vision–language models (LVLMs) can describe driving scenes and support decisions [Li25], but they sometimes hallucinate objects, relations, or events that are not present [Liu24,Liu25]. In a safety-critical domain, reducing hallucinations and improving robustness and trustworthiness are essential.