ChatGPT Blog Post Series Part 3 – Chatting with ChatGPT: Evidence for Scientific Writing


Imagine if you could write a scientific paper in minutes, without doing any research or typing a single word. Sounds too good to be true, right? Well, that's what ChatGPT seems to have the capability to do. As soon as it became available, ChatGPT took academia by storm, raising eyebrows about what it may mean for the future of academic writing. Is ChatGPT a useful tool that can help students and researchers improve their academic writing skills and productivity? Or is it a dangerous technology that can degrade the quality and integrity of scholarly work?


As a curious and analytical mind, I set out to experiment and research the current evidence on this topic myself. Thanks to the version that is free to use by anyone with internet access, most of us have already discovered that ChatGPT can answer questions, generate a list of publications on a specific topic, summarize literature, and even create tables and graphics. But is there any evidence that it actually works, and what is the scientific community's stance on this? What better first step could there be than asking ChatGPT itself?


In this post, I will summarize my chat with ChatGPT about current scientific evidence for using ChatGPT to write academic papers.





ChatGPT's response to my question about the current scientific evidence for using ChatGPT to write academic papers was very reasonable. It started by acknowledging that its training data has a cutoff date of September 2021. Then it listed the major limitations of ChatGPT:

  • Lack of domain-specific knowledge

  • Quality control

  • Ethical considerations

  • Lack of critical thinking and analysis

  • Limited interaction capabilities


My next question to ChatGPT was to generate a list of publications about using ChatGPT to write academic papers. It again mentioned the knowledge cutoff date in 2021 and the fact that there weren't any publications about using ChatGPT to write academic papers. It instead provided a list of papers discussing language models like ChatGPT. I double-checked each of these. It turned out that only one of the four was completely accurate. The second was correct except for the publication date, and a Google search for the other two publications didn't return any relevant results. This points to one of the most important considerations to keep in mind when using ChatGPT. We have all been warned by ChatGPT itself, after all: Free Research Preview. ChatGPT may produce inaccurate information about people, places, or facts.





Next, I looked into the current literature about ChatGPT. If my search for "ChatGPT" in PubMed, which yielded nearly twice as many papers as it did a month ago, is any indication, there is no question that ChatGPT continues to engage the scientific world, with enthusiastic contributions from both proponents and opponents. In the next blog post in the series, I will explore the current literature on the implications of using ChatGPT for academic writing and of listing ChatGPT as a co-author on academic papers.


