
The problems of choosing values for artificial intelligence systems

Authors: Maykova V.P., Molchan E.M., Maykov A.I. Published: 04.05.2021
Published in issue: #2(88)/2021  
DOI: 10.18698/2306-8477-2021-2-715  
Category: The Humanities in Technical University | Chapter: Philosophy of Science  
Keywords: artificial intelligence systems, values, ethics, moral values, moral norms, moral principles, superintelligence, cyborg machine

The paper considers the problems of choosing values for artificial intelligence systems, which in the near future will become independent agents of objective reality, embedded in human life and activity. The study shows that, amid the urgent problems of a changing reality, modern society faces a twofold task: choosing values for artificial intelligence systems and working through the technical and regulatory stages of their implementation. The concept of “value” is replaced by a preference, an information-idealized form built into the software of cyborg systems. The findings show that the choice and inclusion of values in cyborg machines is driven by the customer’s priorities. However, because artificial intelligence systems are self-learning, unpredictable synergistic effects are possible, or such systems may pass over into superintelligence, which can render them uncontrollable. This calls for a new methodology grounding modern scientific and technological progress and cognitive, i.e. epistemological, information development, so as to form a legal and ethical framework in response to the challenges of digital globalization, which is turning into globalizing individualism.


References
[1] Sutrop M. Should we trust artificial intelligence? Trames, 2019, vol. 23, no. 4, pp. 499–522.
[2] Brundage M., Avin S., et al. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Available at: https://maliciousaireport.com (accessed March 5, 2021).
[3] EU Commission. A Definition of AI: Main Capabilities and Disciplines. Available at: https://www.aepd.es/media/docs/ai-definition.pdf (accessed March 5, 2021).
[4] Fjeld J., Achten N., Hilligoss H., Nagy A.C., Srikumar M. Principled artificial intelligence: mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center Research Publication, 2020, no. 1, pp. 1–71.
[5] Gabriel I. Artificial intelligence, values, and alignment. Minds and Machines, 2020, vol. 30, pp. 411–437.
[6] Berberich N., Diepold K. The virtuous machine — old ethics for new technology? arXiv preprint, 2018, pp. 1–25. Available at: https://deepai.org/publication/the-virtuous-machine-old-ethics-for-new-technology (accessed March 5, 2021).
[7] Russell S. Human Compatible: Artificial Intelligence and the Problem of Control. London, Allen Lane, Penguin Books, 2019, 376 p.
[8] Etzioni A. Incorporating ethics into artificial intelligence. In: Etzioni A., ed. Happiness is the Wrong Metric: A Liberal Communitarian Response to Populism. Cham, Springer, 2018, pp. 235–252. Available at: https://www.springer.com/gp/book/9783319696225 (accessed March 5, 2021).
[9] EU Commission. Ethics Guidelines for Trustworthy AI. Available at: https://ec.europa.eu/futurium/en/ai-alliance-consultation (accessed March 5, 2021).
[10] Everyday Ethics for Artificial Intelligence. Available at: https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf (accessed March 5, 2021).
[11] Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. Available at: https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead1e.pdf (accessed March 5, 2021).
[12] Wolf S.M. Two levels of pluralism. Ethics, 1992, vol. 102, no. 4, pp. 785–798.
[13] Berlin I. The Pursuit of the Ideal. In: Hardy H., ed. The Crooked Timber of Humanity. Princeton & Oxford, Princeton University Press, 2013, 345 p.
[14] Maykova V.P., Molchan E.M. Prednachala filosofii virtualnoy realnosti [The beginning of the philosophy of virtual reality]. Moscow, Sputnik+ Publ., 2020, 65 p.