After some impressive demonstrations of its abilities in the past year, AI garnered the optimism and excitement that comes with shiny new technologies. Paul McCartney even used it to help finish one last Beatles song, more than 50 years after the band broke up.
And then the mistakes began to pile up. While optimistic about its abilities, Morgan Stanley is approaching ChatGPT with caution. “When we talk of high-accuracy tasks, it is worth mentioning that ChatGPT sometimes hallucinates and can generate answers that are seemingly convincing, but are actually wrong,” wrote analyst Shawn Kim in a mid-February note.
Take the case of the New York lawyer who relied on it to write a brief for him. He was delighted when it supplied case law that fit his argument like a glove, and he presented the brief to the court without proper scrutiny. It turned out to feature fake cases and invented legal citations. The episode landed him and a second lawyer from the firm in legal hot water: they were fined $5,000, on top of major career embarrassment. He’ll be carrying the nickname “ChatGPT lawyer” for some time, and the firm could still face discipline from New York State’s bar association.
The bottom line? Use ChatGPT for business communications with care and discernment. Some organizations don’t allow its use at all, but if yours does, here’s how to use it responsibly and effectively.
Be patient: treat it like an intern
Learn its strengths and weaknesses, says Wharton professor Ethan Mollick. When it makes a mistake, point it out and ask it to do better.
Be skilled: give it good-quality prompts
Garbage in, garbage out, as the saying goes, and it’s especially true with ChatGPT. The best prompts don’t leave the model guessing: they spell out the role you want it to play, the audience you’re writing for, the tone, the format, and the length.
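For instance, an illustrative prompt (the details here are invented) might read: “Act as an experienced communications editor. Rewrite the draft below for a general business audience, in a friendly but professional tone, in under 200 words.” The specifics matter less than the structure: role, audience, tone, length.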
Be circumspect: don’t include confidential information or data
Anything you type into a prompt may be retained and used to train future versions of the model, so keep confidential information safe by leaving it out.
Be skeptical
Chatbots produce “hallucinations” like the fake case law described above: in the AI sense, a hallucination is when a large language model (LLM) such as ChatGPT produces false information. ChatGPT does it so often that New York University journalism professor Charles Seife, writing in Slate, called it “a computer program that would be sociopathic if it were alive.” “Even when it’s not supposed to, even when it has a way out, even when the truth is known to the computer and it’s easier to spit it out rather than fabricate something—the computer still lies.”
If ChatGPT’s results sound too good to be true, they probably are. Do what good journalists do: fact-check the information it supplies.
Be iterative: keep refining your prompts to create a better version
ChatGPT builds on what you give it, and the writing becomes collaborative as each round pushes you toward a better version of your document. Don’t treat a ChatGPT prompt like a one-and-done Google search. Do let it produce a plain-language version of a report on a complex topic, then make the corrections needed to keep it factually accurate. A healthy back-and-forth exchange might improve your own writing skills in the process.
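A follow-up prompt can be as simple as (to give an invented example): “Shorten the second paragraph, replace the jargon with plain language, and keep every figure exactly as written.” Each revision narrows the gap between what you asked for and what you actually meant.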
Be selective so your personality shines through
ChatGPT often produces bland, boring prose, and it smooths away precisely the sort of wording that makes writing sound distinctively human. Zadie Smith called that quality the “watermark of the self,” and it’s one thing an AI can’t provide: the beautiful idiosyncrasies of the human imagination.