Generally, training an LLM is a bad way to provide it with information. "In-context learning" is probably what you're looking for: basically just pasting the relevant info and documents into your prompt.
You might try fine-tuning an existing model on a large dataset of legalese, but then it'll just be more likely to generate responses that *sound* like legalese, which defeats the purpose.
TL;DR: Use in-context learning to provide information to an LLM. Use training and fine-tuning to change how the language the LLM generates sounds.
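To make the in-context approach concrete, here's a minimal sketch. The `build_prompt` helper and the example document are mine, not from any particular library; the point is just that "providing information" is string assembly, no training involved:

```python
def build_prompt(documents, question):
    """Assemble an in-context-learning prompt: paste the relevant
    documents ahead of the question so the model can read them at
    inference time, instead of baking them in via training."""
    context = "\n\n".join(
        f"Document {i + 1}:\n{doc}" for i, doc in enumerate(documents)
    )
    return (
        "Answer the question using only the documents below.\n\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Hypothetical legalese snippet to ground the answer in.
docs = ["Section 4.2: The tenant must give 30 days' written notice."]
prompt = build_prompt(docs, "How much notice must the tenant give?")
# `prompt` then goes to whatever chat/completions endpoint you use.
```

The model answers from the pasted text rather than from whatever it memorized during training, which is exactly the distinction in the TL;DR above.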
It’s just polarizing. You’re either making people more staunch in their beliefs, or annoying people who would rather not deal with aggression (like myself).
If your goal is to drive people away and make a space where everyone just agrees with you all the time, then it’s effective.