No. 44 - Fine-tuning large language models for financial markets via ontological reasoning


by Teodoro Baldazzi, Luigi Bellomarini, Stefano Ceri, Andrea Colombo, Andrea Gentili and Emanuel Sallinger, January 2024

Large Language Models (LLMs) usually undergo a pre-training process on extensive collections of generic textual data, which are often publicly accessible. Pre-training enables LLMs to grasp language grammar, understand context, and convey a sense of common knowledge. Pre-training is a form of self-supervised machine learning: the LLM is trained to predict the next basic text unit (e.g., a word or a sequence of words) from the sequence of previously observed units. However, despite the impressive generalization and human-like interaction capabilities shown in Natural Language Processing (NLP) tasks, pre-trained LLMs exhibit significant limitations and provide poor accuracy when applied in specialized domains. Their main limitation stems from the fact that the data used in generic pre-training often lack knowledge of the specific domain. To address these limitations, fine-tuning techniques are often employed to refine pre-trained models on domain-specific data. Factual information is extracted from company databases to create text collections for fine-tuning purposes. However, even in this case, results tend to be unsatisfactory in complex domains, such as financial markets and finance in general.
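The next-unit prediction objective described above can be illustrated with a deliberately minimal sketch: a bigram model that predicts the next word from the previous one using raw counts. This is not an LLM (which uses neural networks over vastly larger contexts and corpora), but it instantiates the same training signal, namely predicting the next unit from what came before; the toy corpus and function names are illustrative only.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the large generic text collections
# used in pre-training.
corpus = "the market rose and the market fell and the index rose".split()

# Count, for each word, how often each word follows it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` in the corpus."""
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "market" (follows "the" twice, vs. "index" once)
```

Scaling this objective from frequency counts to deep networks is what gives LLMs their fluency, but, as the abstract notes, nothing in it supplies specialized domain knowledge.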

Examining the issue from a different perspective, the Knowledge Representation and Reasoning (KRR) community has focused on producing formalisms, methods, and systems for representing complex Enterprise Knowledge. In particular, Enterprise Knowledge Graphs (EKGs) can leverage a combination of factual information in databases and business knowledge specified in a compact and formal fashion. EKGs serve the purpose of answering specific domain queries through established techniques such as ontological reasoning: domain knowledge is represented in symbolic form, e.g., in logic-based languages, and used to draw consequential conclusions from the available data. However, while EKGs are applied successfully in many financial scenarios, they lack the flexibility, common sense, and linguistic orientation essential for NLP.
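To make the "data plus logic rules" idea concrete, the following sketch runs a toy forward-chaining reasoner over a hypothetical company-control rule of the kind common in financial EKGs: x controls y if x owns more than 50% of y, directly or through the companies it already controls. The ownership data, the rule, and the Python encoding are all illustrative; real systems such as Vadalog express such rules declaratively in a logic-based language and evaluate them at scale.

```python
def controls(own):
    """Compute the fixpoint of the control rule over direct ownership shares.

    `own` maps (shareholder, company) pairs to ownership fractions.
    A pair (x, y) is derived when x's direct stake in y, plus the stakes
    held by companies x controls, exceeds 50%.
    """
    companies = {c for pair in own for c in pair}
    ctrl = set()
    changed = True
    while changed:  # iterate until no new fact can be inferred
        changed = False
        for x in companies:
            for y in companies - {x}:
                if (x, y) in ctrl:
                    continue
                share = own.get((x, y), 0.0) + sum(
                    own.get((z, y), 0.0) for z in companies if (x, z) in ctrl)
                if share > 0.5:
                    ctrl.add((x, y))
                    changed = True
    return ctrl

# A owns 60% of B directly; A and B each own 30% of C, so A controls
# C only via the inferred fact that A controls B.
own = {("A", "B"): 0.6, ("B", "C"): 0.3, ("A", "C"): 0.3}
print(sorted(controls(own)))  # -> [('A', 'B'), ('A', 'C')]
```

The second derived fact, A controls C, is exactly the kind of consequential conclusion that is present nowhere in the raw data and emerges only from reasoning.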

This paper proposes an approach aimed at enhancing the utility of LLMs for specific applications, such as those related to financial markets. The approach involves guiding the fine-tuning process of LLMs through ontological reasoning on EKGs. In particular, we exploit the Vadalog system, a state-of-the-art automated reasoning framework, and its language to synthesize an extensive fine-tuning corpus from a logical formalization of domain knowledge in an EKG. Our contribution consists of a technique called verbalization, which transforms the set of inferences determined by ontological reasoning into a corpus for fine-tuning. We present a complete software architecture that applies verbalization to four NLP tasks: question answering, i.e., providing accurate responses in a specific domain in good prose; explanation, i.e., systematically justifying the conclusions drawn; translation, i.e., converting domain specifications into logical formalization; and description, i.e., explaining formal specifications in prose. We apply the approach and our architecture in the context of financial markets, presenting a proof of concept that highlights their advantages.
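The verbalization step can be pictured with a small sketch: inferred facts are mapped through natural-language templates into question/answer pairs, yielding a text corpus in the form fine-tuning expects. The predicates, templates, and company names below are hypothetical placeholders, not the paper's actual implementation, which derives its corpus from Vadalog inferences over a real EKG.

```python
# Hypothetical templates mapping a predicate to a (question, answer) pair.
TEMPLATES = {
    "controls": ("Who controls {o}?", "{s} controls {o}."),
    "owns": ("What share of {o} does {s} own?", "{s} owns {p:.0%} of {o}."),
}

def verbalize(facts):
    """Turn (predicate, subject, object[, value]) facts into Q/A text pairs."""
    corpus = []
    for fact in facts:
        pred, s, o = fact[0], fact[1], fact[2]
        p = fact[3] if len(fact) > 3 else None
        q_tpl, a_tpl = TEMPLATES[pred]
        corpus.append((q_tpl.format(s=s, o=o), a_tpl.format(s=s, o=o, p=p)))
    return corpus

inferred = [("controls", "CompanyA", "CompanyB"),
            ("owns", "CompanyA", "CompanyB", 0.6)]
for q, a in verbalize(inferred):
    print(q, "->", a)
```

Because the facts being verbalized include those *inferred* by reasoning, not just those stored in the database, the resulting corpus injects into the LLM domain conclusions it could never pick up from generic pre-training text.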