With the advancement of Artificial Intelligence-based systems and machines, their application and relevance across multiple fields and disciplines have been ever-increasing. The application of Artificial Intelligence (AI) in multiple sectors, if not every sector, is indicative of its broad spectrum of uses. In very broad terms, AI is “intelligence” displayed by machines that mimic the natural cognitive skills of humans. A modern definition is concerned less with how closely AI mimics human thought and more with rationality: the ability to act rationally.
AI is a wide field of research consisting of multiple sub-domains, each with its own applications and uses. In general, AI-based systems and machines aim to enhance, augment, and automate tasks that require human-like reasoning, problem-solving, and perception. Ultimately, as with human work, the goal is to carry out a set of tasks and instructions to reach a specific objective.
There is a problem
AI use cases for the advancement and betterment of society are only a Google search away, and there are exemplary applications that have aided organisations, companies, communities, and countries. However, AI does not come without its own set of challenges, including risks from individuals acting in bad faith, the weaponisation of AI, biased data, job losses, and existential concerns. Its application is largely dictated by the creativity, ability, and capacity of an individual or organisation, which can bring about incorrect or harmful outcomes, sometimes with intent.
It is closer to home than we think
Unfortunately, the harmful use of AI within the academic environment has become an increasing phenomenon. As much as AI is being used to find innovative solutions to research questions, it is also being used to circumvent detection efforts or to falsify research data. Historically, the use of AI-based systems and tools was confined to specific disciplines and labs. Today, however, these systems are readily available and require little to no skill to benefit from; online tutorials and ready-made tools are freely accessible. There are multiple reasons why researchers might falsify research (e.g., career and funding pressures, inadequate institutional oversight, insufficient training), all of which ultimately relate to questionable integrity practices. Many of these methods and subversion efforts are extremely hard to detect and interrogate, as the systems and tools are ever-evolving and current detection efforts age within a very short time frame.
It is not all bad news
The use of AI in research has many benefits, and it should not be discouraged. What should concern us is when an individual or group of researchers purposely fails to acknowledge its use, or intends to avoid detection of generated data or processes. The following can serve as guiding questions/notes when using AI as part of your research process:
• Have I ensured that the data is generated ethically?
• Can I ensure that all forms of bias have been removed from the data?
• Will the generated data harm or affect a specific group of individuals?
• State the use and purpose of AI methods and tools throughout the research process: when writing a research proposal, collecting data, and writing up research findings
• Question the collection methods and algorithms used when working with open datasets
The key to it all
“Removing improper incentives, training researchers, and imposing better governance is vital to reducing research misconduct. Awareness of the possibility of misconduct and formalised procedures that scrutinize study trustworthiness is important during peer review and in systematic reviews.” - Li, W., Gurrin, L. C. and Mol, B. W. (2022) “Violation of Research Integrity Principles Occurs More Often Than We Think,” Reproductive BioMedicine Online, 44(2), pp. 207–209.
The approach described above is the most effective means of combating and preventing the falsification of research data, both in general and where AI is involved. These efforts are not an individual's responsibility alone, but that of the whole organisation, which should strive to ensure sound and ethical research practices.
This presentation will provide an overview of the current trends, preventative measures, and training involved in countering the use of AI to fake research data.