
The last few years have witnessed a rapid penetration of artificial intelligence (AI) into different walks of life, including medicine, the judicial system, public governance and other important activities. Despite the multiple benefits of these technologies, their widespread dissemination raises serious concerns as to whether they are trustworthy. The article analyzes the key factors behind public mistrust in AI and discusses ways to build confidence. To understand the reasons for mistrust, the author draws on the historical context, social research findings and judicial practice. Special attention is paid to the security of AI use, AI transparency to users and responsibility for decision-making. The author also discusses the current regulatory models in this area, including the development of a universally applicable legal framework, regulatory sandboxes and self-regulation mechanisms for the sector, with multidisciplinary collaboration and adaptation of the existing legal system as a key factor in this process. Only such an approach will produce balanced development and use of AI systems in the interest of all stakeholders, from vendors to end users. For a more exhaustive coverage of the subject, the following general methods are applied: analysis, synthesis and systematization, together with special legal (comparative legal and historical legal) research methods. In analyzing the available data, the author argues for a comprehensive approach to making AI trustworthy. Based on the study's findings, the following hypothesis is proposed: trust in AI is a cornerstone of efficient regulation of AI development and use in various areas.
The author is convinced that, with AI made transparent, safe and reliable, and subject to human oversight through adequate regulation, the government will be able to maintain purposeful collaboration between humans and technology, thus setting the stage for AI use in critical infrastructures affecting the life, health and basic rights and interests of individuals.

Keywords: self-regulation for AI system development, artificial intelligence, trust in AI systems, system transparency, system visibility, system security, system reliability, regulatory model, regulatory sandbox
Author(s): Вашурина Светлана Сергеевна
Journal: LEGAL ISSUES IN THE DIGITAL AGE


Identifiers and classifiers

UDC
340. Law in general. Propaedeutics. Methods and auxiliary legal sciences
For citation:
ВАШУРИНА С. С. TRUST IN ARTIFICIAL INTELLIGENCE: REGULATORY CHALLENGES AND PROSPECTS // LEGAL ISSUES IN THE DIGITAL AGE. 2025. № 2