ChatGPT was the first of what are now many generative AI chatbots that produce coherent written responses and computer code from a user’s natural language prompts, adapting their responses based on user feedback or follow-up prompts. Many are available for free (usually with user registration), but some of the most advanced tools, such as GPT-4, require a paid subscription.
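Most people use these chatbots through a web or app interface, but the same prompt-and-follow-up dynamic is visible in the programmatic interfaces some providers offer. Below is a minimal sketch of a two-turn exchange using OpenAI’s Python SDK; the model name and prompts are illustrative, and an API key is assumed to be set in the environment.

```python
# A minimal sketch of a two-turn chatbot exchange (illustrative only).
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# First prompt: the conversation so far is a single user message.
messages = [{"role": "user", "content": "Explain photosynthesis in two sentences."}]
reply = client.chat.completions.create(model="gpt-4", messages=messages)
print(reply.choices[0].message.content)

# Follow-up prompt: append the model's answer and a new request, so the
# model can adapt its next response to the earlier exchange.
messages.append({"role": "assistant", "content": reply.choices[0].message.content})
messages.append({"role": "user", "content": "Now rewrite that for a ten-year-old."})
reply = client.chat.completions.create(model="gpt-4", messages=messages)
print(reply.choices[0].message.content)
```

The key point the sketch illustrates is that the model has no memory of its own: each request sends the whole conversation history, which is how the chatbot appears to adapt to feedback and follow-up prompts.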
These tools have already been used to produce a wide range of essays, exam solutions, letters, fiction, blog posts, and many other outputs at varying levels of quality and accuracy. The materials are usually clearly structured and grammatically correct. The tools can also generate code, fix coding bugs, work through certain problem sets, and produce summaries of readings, including materials from PDFs.
While there is a lot generative AI can do, it also has varied and changing limitations. These tools periodically generate what have come to be called “hallucinations”: plausible-looking information that is completely made up. Some, but not all, of the tools struggle with referencing sources: they may produce citations that look correct but do not refer to an actual article or book, or present information with no way to verify where it comes from or whether it is valid. Generative AI can also produce harmful, biased, misleading, or simply inaccurate content. All of this means that knowing a field well helps you use generative AI effectively.
Additionally, many tools have no knowledge of anything that happened after their training cutoff dates (2021 for ChatGPT; 2023 for Claude), and only some of them have access to the internet.
If you want to better understand how these chatbots work, The New York Times has a series called How to Become an Expert on A.I. that can help.