AI in Financial Fraud Detection: A Comparative Analysis of Machine Learning Models

Prof. Robert Dam

Abstract

Fraud detection is a critical application of AI in the financial industry. This paper compares machine learning models, including logistic regression, decision trees, and neural networks, for detecting fraudulent transactions. By evaluating these models on real-world transaction data, we assess their relative effectiveness and propose strategies for improving fraud detection accuracy.
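
As a rough illustration of the kind of comparison the abstract describes (not the paper's actual pipeline or dataset), the sketch below trains the three named model families with scikit-learn on a synthetic, class-imbalanced dataset standing in for transaction data, and reports precision, recall, and ROC AUC. All dataset sizes, hyperparameters, and the 0.5 decision threshold are illustrative assumptions.

```python
# Illustrative sketch only: the paper's real-world data is not available here,
# so a synthetic, class-imbalanced dataset stands in for transaction records.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import precision_score, recall_score, roc_auc_score

# ~1% positive ("fraud") class to mimic the heavy imbalance typical of transactions.
X, y = make_classification(n_samples=20000, n_features=20, weights=[0.99],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "logistic regression": make_pipeline(
        StandardScaler(),
        LogisticRegression(class_weight="balanced", max_iter=1000)),
    "decision tree": DecisionTreeClassifier(
        class_weight="balanced", max_depth=8, random_state=0),
    "neural network": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    scores = model.predict_proba(X_test)[:, 1]   # estimated fraud probability
    preds = (scores >= 0.5).astype(int)          # default decision threshold
    print(f"{name:>20}: precision={precision_score(y_test, preds, zero_division=0):.3f} "
          f"recall={recall_score(y_test, preds):.3f} "
          f"auc={roc_auc_score(y_test, scores):.3f}")
```

In practice, the handling of class imbalance (class weights, resampling) and the choice of decision threshold often affect fraud-detection precision and recall as much as the choice of model family.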

How to Cite

AI in Financial Fraud Detection: A Comparative Analysis of Machine Learning Models. (2023). Research-Gate Journal, 9(9). https://research-gate.in/index.php/Rgj/article/view/39

