How to Fine-Tune a Model for Persuasion Detection

Apr 18, 2022 | Educational

In this article, we will guide you through the process of fine-tuning a language model, specifically roberta-large, to detect donation intent using the Persuasion For Good dataset. The goal of this fine-tuning task is to predict whether a persuadee intends to donate, based solely on their utterances.

Understanding the Dataset

The dataset comprises dialogues between persuaders and persuadees. Our focus is only on the persuadee’s side of the conversation. Each dialogue consists of multiple utterances, separated by the special token `<s>`. Here’s a breakdown of our task:

  • Input: The concatenation of persuadee utterances.
  • Label: A binary classification:
    • 0: The persuadee does not intend to donate.
    • 1: The persuadee intends to donate.
Input: <s>How are you?<s>Can you tell me more about the charity?<s>...<s>Sure, I'll donate a dollar.<s>...<s>
Label: 1
Input: <s>How are you?<s>Can you tell me more about the charity?<s>...<s>I am not interested.<s>...<s>
Label: 0
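The concatenation step can be sketched in a few lines of Python. The helper name `build_input` is ours, not from the original pipeline, and we assume the separator is the `<s>` token placed before, between, and after the persuadee utterances:

```python
SEP = "<s>"  # assumed separator token between persuadee utterances

def build_input(utterances):
    """Concatenate persuadee utterances into a single classifier input,
    with the separator before, between, and after the utterances."""
    return SEP + SEP.join(utterances) + SEP

example = build_input([
    "How are you?",
    "Can you tell me more about the charity?",
    "Sure, I'll donate a dollar.",
])
# example == "<s>How are you?<s>Can you tell me more about the charity?<s>Sure, I'll donate a dollar.<s>"
```

The resulting string is then tokenized and fed to the classifier as one sequence, paired with the binary label.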

Data Information

Here’s a brief overview of the data distribution used for training and validation:

  • Training set: 587 dialogues, utilizing real donation outcomes as labels.
  • Validation set: 141 dialogues with labels derived from the Persuasion For Good AnnSet.
  • Test set: 143 dialogues also using manual donation intention labels from the same source.

Model Training Configuration

For fine-tuning our model, we considered the following training specifics:

  • Loss Function: We employed cross-entropy with class weights of 1.5447 for class 0 and 0.7393 for class 1. This weighting compensates for the class imbalance in the training set.
  • Early Stopping: We monitored the validation macro F1 score and selected the checkpoint with the highest value, which occurred at step 35.
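To make the weighting concrete, here is a self-contained sketch of weighted cross-entropy for a single example (plain Python for illustration; this is the per-example quantity that `torch.nn.CrossEntropyLoss(weight=...)` computes during training):

```python
import math

# Class weights from the training setup: class 0 (no donation) gets the
# larger weight so its errors count more in the loss.
CLASS_WEIGHTS = [1.5447, 0.7393]

def weighted_cross_entropy(logits, label, weights=CLASS_WEIGHTS):
    """Weighted cross-entropy for one example.

    logits: raw scores [z0, z1]; label: 0 or 1.
    """
    z_max = max(logits)  # subtract the max for numerical stability
    log_sum = z_max + math.log(sum(math.exp(z - z_max) for z in logits))
    log_prob = logits[label] - log_sum  # log-softmax at the true class
    return -weights[label] * log_prob

# With identical logits, misclassifying a class-0 example costs roughly
# twice as much as misclassifying a class-1 example.
loss_0 = weighted_cross_entropy([0.0, 0.0], 0)  # 1.5447 * ln(2)
loss_1 = weighted_cross_entropy([0.0, 0.0], 1)  # 0.7393 * ln(2)
```

This rebalancing keeps the minority "does not intend to donate" class from being drowned out by the majority class during optimization.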

Model Testing

Upon completion of the training phase, we evaluated the model on the held-out test set, obtaining the following metrics:

  • Test Macro F1 Score: 0.893
  • Test Accuracy: 0.902
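Macro F1 averages the per-class F1 scores, so both classes count equally despite the imbalance. A minimal implementation (the function name is ours) for checking your own runs:

```python
def macro_f1_and_accuracy(y_true, y_pred):
    """Compute macro F1 and accuracy for binary labels in {0, 1}."""
    f1s = []
    for c in (0, 1):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)  # F1 for class c
    acc = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    return sum(f1s) / len(f1s), acc
```

In practice you would get the same numbers from `sklearn.metrics.f1_score(..., average="macro")` and `accuracy_score`.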

Troubleshooting

If you encounter any issues during the fine-tuning process, consider the following suggestions:

  • Ensure that your dataset is properly formatted. Any issues with delimiters or token inconsistencies may affect training outcomes.
  • If your model isn’t converging, consider adjusting the learning rate or increasing the number of training epochs.
  • Monitor your validation metrics closely; this can provide insights into overfitting or underfitting behavior.
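As a concrete instance of the first check, a quick sanity test for delimiter formatting might look like this (the separator and helper name are illustrative assumptions):

```python
def is_well_formed(text, sep="<s>"):
    """Return True if the input starts and ends with the separator and
    contains no empty utterances between consecutive separators."""
    if not (text.startswith(sep) and text.endswith(sep)):
        return False
    inner = text.split(sep)[1:-1]  # drop the empty edge pieces from split
    return len(inner) > 0 and all(part.strip() for part in inner)

# is_well_formed("<s>How are you?<s>Sure.<s>")  -> True
# is_well_formed("How are you?<s>Sure.<s>")     -> False (missing leading sep)
# is_well_formed("<s>Hi<s><s>Bye<s>")           -> False (empty utterance)
```

Running a check like this over the whole dataset before training catches formatting problems early, when they are cheap to fix.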

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

Conclusion

Fine-tuning a model for persuasion detection can significantly enhance your ability to understand behavioral intentions in dialogues. By employing robust training metrics and addressing discrepancies during the training process, you can achieve impressive results. At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
