Interpreting AI for Networking: Where We Are and Where We Are Going
Deputy Librarian, NGM College, Pollachi 642001
E-Mail: ngmcollegelibrary@gmail.com, Mob: 9788175456
ORCID: https://orcid.org/0000-0003-1006-158X
Keywords:
Artificial Intelligence (AI), Networking, XAI
Abstract
Artificial Intelligence (AI) has rapidly transcended the realm of academic research to become a dominant force in various sectors, including networking. The multifaceted nature of networking poses intricate problems that traditional deterministic methods struggle to solve efficiently. As such, the incorporation of AI techniques offers solutions that not only optimize performance but also enhance the adaptability and resilience of networking systems. However, despite the promising capabilities of AI algorithms, their complex nature often renders them opaque or unintelligible to human users. This opacity significantly hampers the commercial viability of AI-based networking solutions. Therefore, a pressing need exists for the development of Explainable AI (XAI) methodologies that make these systems interpretable, manageable, and ultimately, trustworthy.
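To make the abstract's call for interpretable networking AI concrete, the minimal sketch below (illustrative only, not taken from the paper) trains an intentionally opaque traffic classifier on synthetic data and then applies permutation feature importance, a simple model-agnostic XAI technique, to report which flow features the model actually relies on. The feature names and the synthetic flow records are assumptions introduced for the example.

```python
# Minimal sketch: post-hoc, model-agnostic explanation of a "black-box"
# traffic classifier via permutation feature importance.
# The feature names and synthetic data are illustrative assumptions,
# not a real networking dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical per-flow features a network classifier might consume.
FEATURES = ["pkt_rate", "mean_pkt_size", "flow_duration",
            "syn_ratio", "dst_port_entropy", "bytes_per_pkt"]

# Synthetic stand-in for labelled flow records (benign = 0, anomalous = 1).
X, y = make_classification(n_samples=2000, n_features=len(FEATURES),
                           n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque model: accurate, but its internals are hard for an operator to read.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# held-out accuracy drops. Large drops mark features the model actually relies
# on, giving the operator a human-readable account of its behaviour.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{FEATURES[idx]:>18s}: {result.importances_mean[idx]:+.3f} "
          f"(+/- {result.importances_std[idx]:.3f})")
```

Permutation importance is used here only because it needs nothing beyond scikit-learn; attribution or surrogate-model methods such as LIME or SHAP serve the same explanatory role in practice.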