My name is Dipkamal Bhusal, and I am currently a fourth-year Computer Science Ph.D. candidate at the Golisano College of Computing and Information Sciences, Rochester Institute of Technology (RIT). My research focuses on reliable deep learning, with a particular emphasis on explainable machine learning.
Prior to my Ph.D., I earned a Master's in Information Engineering from Pulchowk Campus, Tribhuvan University (Nepal) in 2021, focusing on machine learning for medical diagnosis. I received my Bachelor's in Electronics and Communication Engineering from Pulchowk Campus, Tribhuvan University (Nepal) in 2016.
From 2016 to 2021, I was a co-founder of Paaila Technology, a robotics and AI startup based in Kathmandu that developed service robots and chatbots. During my tenure, I served in multiple roles as a developer, project manager, and director. You can read more about Paaila Technology here. Before pursuing my PhD, I also worked as a lecturer of computer science at IIMS College, Kathmandu, from 2020 to 2021.
Ayushi Mehrotra, Dipkamal Bhusal (Mentorship), Nidhi Rastogi
2nd Workshop on Attributing Model Behavior at Scale at NeurIPS 2024
This work introduces Hessian Sets, a technique that leverages the Hessian matrix to detect and attribute pairwise feature interactions in image classifiers. We adapt Integrated Directional Gradients (IDG) to assign importance to these feature interaction sets, and by integrating segmentation masks from the Segment Anything Model (SAM), we provide more interpretable and concise explanations. This was my first paper as a mentor. The workshop paper presented work in progress; we are currently extending it and plan to resubmit soon. A rough sketch of the interaction-detection step is shown below.
Paper | Code (Coming Soon)
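As a rough illustration of the core idea (not the paper's exact Hessian Sets implementation), the sketch below scores pairwise interactions between image segments by taking the Hessian of the target-class score with respect to per-segment scaling coefficients. The PyTorch model, mask format, and function names are illustrative assumptions.

```python
# A minimal sketch, assuming a PyTorch image classifier and binary segment
# masks (e.g. produced by SAM). Illustrative only, not the paper's code.
import torch

def segment_interaction_scores(model, image, masks, target_class):
    """Score pairwise interactions between image segments.

    image: (1, C, H, W) input tensor
    masks: (K, 1, H, W) binary segment masks that roughly partition the image
    Returns a (K, K) matrix whose off-diagonal entries |H[i, j]| indicate how
    strongly segments i and j interact in the target-class score.
    """
    def class_score(alpha):
        # Scale each segment by its coefficient alpha_i and score the result.
        field = (alpha.view(-1, 1, 1, 1) * masks).sum(dim=0, keepdim=True)
        return model(field * image)[0, target_class]

    # Evaluate the Hessian at alpha = 1, i.e. at the unmodified image.
    alpha = torch.ones(masks.shape[0])
    hessian = torch.autograd.functional.hessian(class_score, alpha)
    return hessian.abs()
```

The sketch only covers detecting candidate interactions; in the paper, importance is then assigned to the resulting feature interaction sets by adapting Integrated Directional Gradients.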
Dipkamal Bhusal, Md Tanvirul Alam, Le Nguyen, Ashim Mahara, Zachary Lightcap, Rodney, Romy, Grace, Benjamin, Nidhi Rastogi
40th Annual Computer Security Applications Conference (ACSAC) 2024
Large Language Models (LLMs) have demonstrated potential in cybersecurity applications, but problems such as hallucinations and a lack of truthfulness undermine confidence in their outputs. Existing benchmarks provide general evaluations but do not sufficiently address the practical, applied aspects of LLM performance on cybersecurity-specific tasks. To address this gap, we introduce SECURE (Security Extraction, Understanding & Reasoning Evaluation), a benchmark designed to assess LLM performance in realistic cybersecurity scenarios.
Md Tanvirul Alam*, Dipkamal Bhusal*, Le Nguyen, Nidhi Rastogi (* Equal Contribution)
38th Annual Conference on Neural Information Processing Systems (NeurIPS) 2024
We extend the knowledge-intensive LLM evaluation framework proposed in SECURE and evaluate LLMs on CTI-specific tasks. Cyber threat intelligence (CTI) is crucial in today's cybersecurity landscape, providing essential insights to understand and mitigate ever-evolving cyber threats. The recent rise of Large Language Models (LLMs) has shown potential in this domain, but concerns about their reliability, accuracy, and hallucinations persist. CTIBench is a benchmark designed to assess LLMs' performance in CTI applications.
Dipkamal Bhusal, Md Tanvirul Alam, Monish K., Michael Clifford, Sara Rampazzi, Nidhi Rastogi
9th IEEE European Symposium on Security and Privacy (EuroS&P) 2024
We develop a practical method that uses the sensitivity of model predictions and feature attributions to detect adversarial samples. Our method, PASA, computes two test statistics from the model prediction and the feature attribution, and reliably detects adversarial samples using thresholds learned from benign samples. We validate our lightweight approach by evaluating PASA on varying strengths of FGSM, PGD, BIM, and CW attacks on multiple image and non-image datasets. A rough sketch of the two statistics is shown below.
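The sketch below illustrates the idea under simple assumptions: add small Gaussian noise to the input and measure how much the logits and a plain input-gradient attribution change. The noise scale, attribution method, and threshold calibration are placeholders rather than the paper's exact choices.

```python
# Illustrative sketch, assuming a PyTorch classifier; noise scale, attribution
# method, and thresholds are placeholders, not the paper's exact choices.
import torch

def pasa_statistics(model, x, sigma=0.05):
    """Compute two sensitivity statistics for an input x of shape (1, C, H, W):
    how much the logits and a gradient attribution change under Gaussian noise."""
    x = x.detach().clone().requires_grad_(True)
    noisy = (x + sigma * torch.randn_like(x)).detach().requires_grad_(True)

    logits, noisy_logits = model(x), model(noisy)
    cls = logits.argmax(dim=1).item()

    # Simple input-gradient attributions for the predicted class.
    attr = torch.autograd.grad(logits[0, cls], x)[0]
    noisy_attr = torch.autograd.grad(noisy_logits[0, cls], noisy)[0]

    pred_stat = (logits - noisy_logits).abs().sum().item()
    attr_stat = (attr - noisy_attr).abs().sum().item()
    return pred_stat, attr_stat

def flag_adversarial(stats, thresholds):
    """Flag the input if either statistic exceeds its threshold, where the
    thresholds are learned from benign samples (e.g. a high percentile of
    the benign statistic distributions)."""
    return stats[0] > thresholds[0] or stats[1] > thresholds[1]
```

The premise is that prediction and attribution sensitivity differ measurably between benign and adversarial inputs, which is why thresholds fit only on benign data suffice for detection.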
Dipkamal Bhusal, Rosalyn Shin, Ajay Ashok Shewale, Monish Kumar Manikya Veerabhadran, Michael Clifford, Sara Rampazzi, Nidhi Rastogi
18th International Conference on Availability, Reliability and Security (ARES) 2023
This paper provides a comprehensive analysis of explanation methods and demonstrates their efficacy in three distinct security applications: anomaly detection using system logs, malware prediction, and detection of adversarial images. Our quantitative and qualitative analysis reveals serious limitations and concerns in state-of-the-art explanation methods in all three applications.
A summary of interesting papers I have come across. I will try to update this page frequently.
A note on understanding non-negative matrix factorization.