To clarify the errors that occurred, and to emphasize that they do not affect the scientific evidence or opinions presented in my declaration, I will describe in more detail my expertise, the context in which the declaration was produced, my research and drafting workflow, and the role of AI tools in that process.
In mid-October, I was approached by Defendants’ counsel to provide an expert declaration, and I agreed to do so as part of my private consulting work. The declaration focused on the psychological and social implications of deepfakes, particularly in relation to credibility, their dissemination online, their impact on trust in the media, and the efficacy of countermeasures. These topics fall squarely within my area of expertise. I have authored over 15 studies on AI and communication since 2017, including a seminal piece on AI-Mediated Communication that has garnered over 400 citations, making it the most cited paper in this emerging field. Additionally, I co-edited the inaugural special issue on the social implications of deepfakes in a prestigious peer-reviewed journal dedicated to exploring the effects of emerging technologies on society. My research also includes extensive work on misinformation, including its psychological underpinnings, prevalence, and potential solutions, as well as a study on misinformation in virtual reality.
I am a researcher and professor specializing in AI, and my current work centers on the challenges posed by deepfakes. The landscape has evolved significantly since the introduction of ChatGPT in November 2022, which accelerated the development of tools for creating deepfakes. Since we co-edited the special issue on deepfakes in 2021, interest in this area has been considerable, as evidenced by the more than 140 citations to our article within a short timeframe, a high citation rate in the social sciences. Following the release of ChatGPT, I have published five peer-reviewed papers on the impact of AI on trustworthiness and communication. As a co-founder of a prominent journal focusing on online trust and safety, I am constantly engaged in reviewing the latest scientific literature at the intersection of AI and society, and I teach a graduate course on language and technology that explores AI and communication.
My workflow for the declaration encompassed three main phases: (a) surveying the literature, (b) analyzing the scientific evidence, and (c) drafting the declaration. To begin, I conducted a thorough review of the literature on deepfakes, using Google Scholar and GPT-4o to identify relevant articles and to connect existing knowledge with new scholarship. Google Scholar enabled searching for scholarly publications across disciplines, while GPT-4o, a generative AI tool, assisted with tasks such as summarization, analysis, and drafting. In the analysis phase, I used GPT-4o to summarize key articles, identify emerging themes, and ascertain any new research questions. The tool also helped produce an initial list of references for citation in the declaration.
The citation errors occurred during the drafting phase, and I provide a detailed account of my process here for transparency. I divided the drafting into two parts: substance and citations. Starting with the substance, I outlined the main sections of the declaration in MS Word and then detailed the key points for each section, using Google Scholar and GPT-4o to assist in this process. The two citation errors, known as “hallucinations,” likely arose from my use of GPT-4o as a drafting tool: when I pasted bullet points into GPT-4o, it generated citations I had not asked for, and these incorrect citations were then carried into my declaration.
When GPT-4o provided answers, I pasted them into my declaration and edited them extensively. In the instances where I had noted to add citations later, GPT-4o automatically generated citations, and these incorrect references replaced the placeholders I had included, which is why I failed to notice the errors. This was solely my oversight, and I apologize for the confusion it has caused.
In the final part of the drafting phase, I asked GPT-4o to create an APA-format reference list from the in-text citations. I did not cross-check this list against my usual reference software, which would likely have caught the errors, so the incorrect citations were carried into the reference list as well.
Upon reevaluation, I found the incorrect citation in paragraph 21 and corrected it to cite Vaccari & Chadwick (2020). This paper supports the point made in paragraph 21 about the difficulty in distinguishing between real and manipulated content. Additionally, other studies such as Köbis et al. (2021) and Weikmann et al. (2024) further validate this claim.
I also identified and corrected the citation error in paragraph 19, which now cites Hancock & Bailenson (2021), a paper that supports the assertion about the believability of deepfake videos. I acknowledge these mistakes and apologize for any confusion they may have caused. I am a co-author of that article, which discusses the dominance of the visual medium in human perception and the greater trustworthiness accorded to misleading audiovisual information compared to verbal messages (Hancock & Bailenson, 2021, p. 150). The article highlights the potential impact of deepfake deception, noting that visual manipulation can have a greater effect on individuals because of the primary role of visual communication in cognition.
Research has shown that manipulated videos are often perceived as more credible than audio- or text-based misinformation, with serious consequences such as wrongful accusations and harm to innocent individuals (Sundar et al., 2021). This finding is consistent with the established understanding of the power of visual signals in shaping human perception.
Upon further review, a minor error was also identified in the citation for Goldstein et al. (2023). The correct authors of that study are Goldstein, J., Sastry, G., Musser, M., DiResta, R., Gentzel, M., and Sedova, K. This mistake, too, resulted from not using my reference software, which allowed inaccurate author details generated by GPT-4o to remain in the declaration.
Despite these errors, I stand by the key points presented in my declaration, as they are supported by empirical evidence from reputable sources. The corrected citations to Hancock and Bailenson (2021) and Vaccari and Chadwick (2020) further support the arguments made in the declaration.
Overall, my declaration emphasizes the importance of understanding the impact of visual manipulation on communication and perception, and the need to critically evaluate information presented in audiovisual formats.