AI-Enhanced Cognitive Architectures for Autonomous Decision-Making in Complex Multi-Agent Systems
Keywords:
multi-agent systems, artificial intelligence, cognitive architecture, autonomous decision-making, reinforcement learning, distributed control, neuro-symbolic systems, agent coordination, decentralized AI, intelligent agents

Abstract
As multi-agent systems (MAS) grow increasingly complex in real-world scenarios, there is a growing need for advanced frameworks that enable robust, autonomous decision-making. This paper introduces a novel cognitive architecture powered by artificial intelligence (AI) and designed to enhance agent autonomy, adaptability, and coordination. The proposed framework integrates key AI techniques, including reinforcement learning, neuro-symbolic reasoning, and decentralized control, to support dynamic decision-making under uncertainty. Inspired by cognitive models of human intelligence, the architecture promotes emergent behavior, real-time learning, and collaborative problem-solving across agents. Simulation results in dynamic, conflict-prone environments demonstrate the framework's superior adaptability and coordination compared with traditional MAS approaches.
