
RE: LeoThread 2025-10-18 14-48


Part 9/12:

The team chose the Llama 3 chat model because its weights are openly available and it can be deployed in compliance with regulatory constraints such as data residency requirements. The models were fine-tuned with parameter-efficient methods like Low-Rank Adaptation (LoRA), which reduce computational costs without sacrificing performance.
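For readers unfamiliar with how LoRA is wired up in practice, here is a minimal sketch using the Hugging Face PEFT library. The post does not name the exact checkpoint or hyperparameters, so the model identifier, rank, alpha, and target modules below are illustrative assumptions, not the team's actual configuration.

```python
# Minimal LoRA setup sketch with Hugging Face PEFT (hyperparameters assumed).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed Llama 3 chat checkpoint
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

lora_cfg = LoraConfig(
    r=16,                                  # low-rank adapter dimension (assumed)
    lora_alpha=32,                         # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections (assumed)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the small adapter matrices are trained
```

Because only the low-rank adapter matrices are updated, the bulk of the base model stays frozen, which is where the computational savings come from.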

Training used a 9:1 split between training and validation data, running on a four-GPU setup with the AdamW optimizer configured for efficiency.
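A rough sketch of that split and optimizer setup is below. The `dataset` is assumed to be a tokenized fine-tuning dataset and `model` the LoRA-wrapped model from the sketch above; batch size, learning rate, and weight decay are assumptions, as the post only states the 9:1 split, the four GPUs, and AdamW.

```python
# Sketch of the 9:1 train/validation split and AdamW setup (values assumed).
import torch
from torch.utils.data import DataLoader, random_split

n_val = len(dataset) // 10                          # hold out ~10% for validation
train_set, val_set = random_split(dataset, [len(dataset) - n_val, n_val])

train_loader = DataLoader(train_set, batch_size=8, shuffle=True)  # batch size assumed
val_loader = DataLoader(val_set, batch_size=8)

# AdamW as stated in the post; learning rate and weight decay are assumptions.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4, weight_decay=0.01)
```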


Results Demonstrate Promising Outcomes

The fine-tuned models, evaluated with metrics such as F1-score and Matthews Correlation Coefficient (MCC), outperformed general-purpose models and even rule-based systems across several key categories:
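(For reference, both metrics are standard classification scores and can be computed with scikit-learn; the labels below are purely illustrative, not the study's data.)

```python
# Illustrative F1 and MCC computation with scikit-learn (dummy labels).
from sklearn.metrics import f1_score, matthews_corrcoef

y_true = [0, 1, 1, 0, 2, 1]   # gold category labels (illustrative)
y_pred = [0, 1, 0, 0, 2, 1]   # model predictions (illustrative)

print("F1 (macro):", f1_score(y_true, y_pred, average="macro"))
print("MCC:", matthews_corrcoef(y_true, y_pred))
```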