Speaker: Mr. Aviv Peleg, B.Sc., M.B.A.
AI Consultant | Head of Data & AI, Noovox
Lecturer, MBA Program | Technion & University of Haifa
Abstract: Large language models and AI culture have entered everyday life in recent years and become a significant influence. Research on how people use language-model tools reveals recurring patterns that effectively automate existing human biases. Well-known biases among the developers and decision-makers who build software are reflected in the models we consume, and users can further amplify them through ad hoc use and by shaping narratives that favor their own ideology or preferences. The result is layered bias, which scholars in linguistics and behavioral economics are working to remedy. We also see progress in "LLM as a Judge": there is evidence that models can substitute for human annotators, yet label-based biases emerge when models evaluate outputs from other models or from themselves. The risks are real, but forthcoming regulation, especially in the European Union, is expected to set standards that improve the consistency and quality of model outputs.
About the speaker: Aviv Peleg serves as Head of Data and AI at Noovox, where he leads automation and applied AI initiatives that modernize business operations. With over a decade of experience across SMBs and international companies, he has designed and implemented end-to-end solutions that streamline processes and deliver measurable impact. He lectures on "Large Language Models and Biases in Social Media" at the Technion and the University of Haifa. Aviv holds a B.Sc. in Industrial Engineering and an M.B.A., both from the Technion.