My research focuses on advancing language models toward a nuanced understanding of language. This involves developing methods to mitigate hallucination and improve the reliability of generative outputs. My work also extends to more rigorous evaluation of model faithfulness, specifically closing the gap between outputs that are merely plausible and those that are genuinely grounded in the source material.
Recent Publications
Ad hoc conventions generalize to new referents
Anya Ji, C. Bergey, Ron Eliav, Yoav Artzi, Robert D. Hawkins
Under review, 2025
CLATTER: Comprehensive Entailment Reasoning for Hallucination Detection
Ron Eliav, Arie Cattan, Eran Hirsch, Shahaf Bassan, Elias Stengel-Eskin, Mohit Bansal, Ido Dagan
Under review, 2025
Explain Yourself, Briefly! Self-Explaining Neural Networks with Concise Sufficient Reasons
Shahaf Bassan, Shlomit Gur, Ron Eliav
ICLR, 2025
Self-Explaining Neural Networks for Business Process Monitoring
Shahaf Bassan, Shlomit Gur, Sergey Zeltyn, Konstantinos Mavrogiorgos, Ron Eliav, D. Kyriazis
ICSBT, 2025
Semantic uncertainty guides the extension of conventions to new referents
Ron Eliav, Anya Ji, Yoav Artzi, Robert D. Hawkins
CogSci, 2023