IEEE COMPUTER SOCIETY · TECHNICALLY CO-SPONSORED
Track
Track 03 — NLP for Finance
Session
Session 2B · Friday 14 November · 14:00–15:30
DOI
10.1109/BIFE.2024.005
Status
Accepted · IEEE Xplore Pending

Abstract

We benchmark four Chinese-capable language models — ChatGLM-3, Qwen-2, BERT-wwm-Chinese, and FinBERT-CN — for sentence-level sentiment extraction on a hand-labelled corpus of 14,000 earnings-call transcripts from Shanghai- and Shenzhen-listed firms (2018-2024). Qwen-2 with chain-of-thought prompting achieves the highest F1 (0.847) on a five-class sentiment task. Downstream, sentiment-weighted portfolios constructed from Qwen-extracted signals earn a CAPM-adjusted alpha of 6.3% annualised over 2022-2023.
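The CAPM-adjusted alpha reported above is, in the standard formulation, the intercept of a regression of excess portfolio returns on excess market returns. The following sketch illustrates that computation on synthetic daily returns; the function name, seed, and data are illustrative assumptions, not the authors' code or results.

```python
import numpy as np

def capm_alpha_annualized(port_ret, mkt_ret, rf=0.0, periods_per_year=252):
    """Estimate annualised CAPM alpha and beta by OLS of excess
    portfolio returns on excess market returns (illustrative sketch)."""
    y = np.asarray(port_ret) - rf
    x = np.asarray(mkt_ret) - rf
    X = np.column_stack([np.ones_like(x), x])  # intercept + market factor
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    alpha_per_period, beta = coef
    return alpha_per_period * periods_per_year, beta

# Synthetic daily returns: portfolio tracks the market with a small drift
rng = np.random.default_rng(0)
mkt = rng.normal(0.0004, 0.01, 500)
port = 0.9 * mkt + 0.0002 + rng.normal(0.0, 0.002, 500)
alpha, beta = capm_alpha_annualized(port, mkt)
```

A sentiment-weighted portfolio's daily return series would take the place of `port` here; the paper's 6.3% figure corresponds to the annualised intercept of such a regression.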

Index Terms

Large Language Models · Sentiment Analysis · Earnings Call · ChatGLM · Qwen

How to Cite

Yuxin Li, Cheng Ma, Tao Zhang, "Large Language Models for Earnings-Call Sentiment Extraction: Comparing ChatGLM, Qwen and BERT on Chinese Listed Firms," in Proc. 17th IEEE International Conference on Business Intelligence and Financial Engineering (BIFE 2024), Hangzhou, China, Nov. 14-16, 2024, pp. 33-40, doi: 10.1109/BIFE.2024.005.