Speakers / Konuşmacılar
Description / Açıklama
The limitations of artificial intelligence (AI) technologies in human interactions become particularly evident in decision-making processes. When AI systems cannot make reliable decisions about situations they have not experienced, trust in these technologies is undermined. Epistemologically, the difficulties AI faces in deciding about previously unexperienced situations lead us to question the reliability of these systems. Ethically, the ability of AI to speak authoritatively about situations it cannot personally experience raises serious concerns. In this context, the concept of “trust” is a significant issue that requires comprehensive examination. A pervasive sense of trust encompasses an individual’s entire being, which can blur the distinction between humans and artificial intelligence. Although it may seem natural to regard AI as a “trusted friend” or “experienced acquaintance,” consulting and heeding the advice of such figures creates a very different dynamic. Therefore, a critical question in the trustor-trustee relationship is whether AI can genuinely replace a trusted and experienced acquaintance. One approach to this issue highlights the limitations AI systems face in ethical decision-making and their inadequacy in making decisions grounded in human experience, asserting that AI cannot meet ethical standards and therefore cannot be accepted as a reliable advisor (Lehner, Ittonen, Silvola, & Ström, 2022). The opposing view argues that AI systems have the potential to improve ethical decision-making by minimizing human error and producing fairer and more objective decisions, and that, when properly programmed, AI can effectively implement ethical standards (Lai, Carton, Bhatnagar, Liao, Zhang, & Tan, 2021). In this study, aligning more closely with the first view, I argue that treating AI as a “reliable decision-making authority” in situations it has not experienced risks misleading humans, particularly in ethical matters and issues requiring experience. By examining trust in AI from epistemological and ethical perspectives, I aim to show that, as long as AI’s lack of experience persists, its negative impacts on human, experience-based decision-making are likely to outweigh the positive ones.
Keywords: Artificial Intelligence (AI), ethical decision-making, epistemological limitations, trust in AI, human experience.
Institution / Affiliation / Kurum
Recep Tayyip Erdogan University, Faculty of Divinity
Presentation language / Sunum Dili | EN (English) |
---|---|
Disciplines / Disiplinler | Philosophy / Felsefe |
E-mail / E-posta | busamet@gmail.com |
ORCID ID | 0000-0003-0725-3396 |
Country / Ülke | Turkiye |