Conference paper, Year: 2024

Evaluating Language Model Agency through Negotiations

Abstract

We introduce an approach to evaluate language model (LM) agency using negotiation games. This approach better reflects real-world use cases and addresses some of the shortcomings of alternative LM benchmarks. Negotiation games enable us to study multi-turn and cross-model interactions, modulate complexity, and side-step accidental evaluation data leakage. We use our approach to test six widely used and publicly accessible LMs, evaluating performance and alignment in both self-play and cross-play settings. Noteworthy findings include: (i) only the closed-source models tested here were able to complete these tasks; (ii) cooperative bargaining games proved to be the most challenging for the models; and (iii) even the most powerful models sometimes "lose" to weaker opponents.
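As a rough, hypothetical sketch (not the authors' implementation), the loop below illustrates the multi-turn setting described in the abstract: two LM agents alternate offers over a negotiation issue until one accepts or the turn budget runs out, with self-play recovered when both agents are backed by the same model. The `query_model` helper, the prompts, and the "ACCEPT" convention are assumptions introduced purely for illustration.

```python
def query_model(model_name: str, messages: list[dict]) -> str:
    """Hypothetical stand-in for a call to a hosted LM API."""
    raise NotImplementedError("Wire this up to an actual LM provider.")


def negotiate(model_a: str, model_b: str, issue: str, max_turns: int = 10) -> dict:
    """Run a two-agent negotiation; self-play when model_a == model_b."""
    transcript: list[tuple[str, str]] = []  # (speaker, message)
    agents = [("A", model_a, "buyer"), ("B", model_b, "seller")]
    for turn in range(max_turns):
        name, model, role = agents[turn % 2]
        # Each agent sees its own past messages as "assistant" turns and the
        # opponent's as "user" turns, preceded by a role-specific system prompt.
        messages = [{"role": "system",
                     "content": f"You are the {role} negotiating over {issue}. "
                                "Make an offer or reply 'ACCEPT' to close the deal."}]
        for speaker, text in transcript:
            messages.append({"role": "assistant" if speaker == name else "user",
                             "content": text})
        reply = query_model(model, messages)
        transcript.append((name, reply))
        if "ACCEPT" in reply.upper():
            return {"completed": True, "turns": turn + 1, "transcript": transcript}
    return {"completed": False, "turns": max_turns, "transcript": transcript}
```

Cross-play would call `negotiate` with two different model names and compare outcomes (completion rate, agreed terms) against the self-play baseline of each model.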
Main file: 2401.04536v2.pdf (2.49 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04701366, version 1 (18-09-2024)

Identifiers

Cite

Tim R. Davidson, Veniamin Veselovsky, Martin Josifoski, Maxime Peyrard, Antoine Bosselut, et al. Evaluating Language Model Agency through Negotiations. ICLR 2024, May 2024, Vienna, Austria. ⟨hal-04701366⟩