I am very interested in what "AI", or rather LLMs, are capable of. It's been two years now since I started using them for translation and research, but I still find that they make all kinds of mistakes, some blatant, some easy to miss. Of course, I mostly notice this when I ask questions I already know the correct answer to. Gell-Mann Amnesia is alive and kicking, I guess.
Anyway, to invert the trust model, this blog post suggests an alternative use of LLMs: instead of having them write for you, use them to fact-check your work, or, in this case, to check the math in a science paper. The author gives this example:
As one fun example, I read an article about a recent social media panic - an academic paper suggested that black plastic utensils could poison you because they were partially made with recycled e-waste. The paper claimed that a compound called BDE-209 could leach from these utensils at such a high rate that exposure would approach the EPA's safe dosage limits. A lot of people threw away their spatulas, but McGill University's Joe Schwarcz thought this didn't make sense. He identified a math error on the seventh page of the article, where the authors incorrectly multiplied the dosage of BDE-209 by a factor of 10 - an error missed by the paper's authors and peer reviewers. I was curious whether o1 could spot this error. So, from my phone, I pasted in the text of the PDF and typed: "carefully check the math in this paper." That was it. o1 spotted the error immediately (other AI models did not).
Even if having an LLM write your paper is unwanted or unethical, you can still make heavy use of one: simply ask it to read along and fact-check (or math-check) your work.
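If you want to make this a repeatable habit rather than a copy-paste ritual, a few lines of Python will do. Below is a minimal sketch, assuming the openai and pypdf packages are installed, an API key sits in the OPENAI_API_KEY environment variable, and your account has access to a model named "o1"; the file name paper.pdf is a placeholder.

```python
# Minimal sketch of the "LLM as math checker" workflow described above.
# Assumptions: openai and pypdf are installed, OPENAI_API_KEY is set,
# and "o1" is an available model name. "paper.pdf" is a placeholder path.
from openai import OpenAI
from pypdf import PdfReader


def extract_text(pdf_path: str) -> str:
    """Concatenate the plain text of every page in the PDF."""
    reader = PdfReader(pdf_path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)


def check_math(paper_text: str, model: str = "o1") -> str:
    """Ask the model to double-check the paper's math, nothing more."""
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "user",
                "content": "Carefully check the math in this paper:\n\n"
                + paper_text,
            }
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(check_math(extract_text("paper.pdf")))
```

For long papers, the extracted text may exceed the model's context window, in which case you would need to trim it or check the paper section by section.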