Research

What if robustness, generalization, and interpretability are governed by the same geometric property?

My research investigates how the spectral properties of neural network weight matrices govern trustworthiness across deployment contexts. The central question: if we constrain the geometry of learned representations, can robustness failures, generalization failures, and opacity be resolved together, as manifestations of the same underlying pathology?

Forthcoming publication, 2026. Patent filed.

Geometric Machine Learning · Representation Learning · Trustworthy AI · Causal Inference · Privacy-Preserving ML · Clinical AI Systems
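
The method itself is forthcoming, so the sketch below only illustrates the kind of object the question is about: it inspects the singular-value spectrum of each weight matrix in a toy PyTorch model and then applies off-the-shelf spectral normalization as one generic way to constrain that spectrum. The model and helper are illustrative, not the research method.

```python
# Minimal sketch (not the forthcoming method): inspect the singular-value
# spectrum of each weight matrix in a toy PyTorch model, then apply
# off-the-shelf spectral normalization as one generic way to constrain it.
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))  # toy model

def weight_spectra(module: nn.Module) -> dict:
    """Singular values of every 2-D weight matrix in `module`."""
    return {
        name: torch.linalg.svdvals(param.detach())
        for name, param in module.named_parameters()
        if param.ndim == 2
    }

for name, s in weight_spectra(model).items():
    # Spectral norm = largest singular value; stable rank = ||W||_F^2 / ||W||_2^2.
    stable_rank = (s.pow(2).sum() / s.max().pow(2)).item()
    print(f"{name}: spectral norm {s.max().item():.3f}, stable rank {stable_rank:.2f}")

# One standard way to constrain the spectrum during training:
for layer in model:
    if isinstance(layer, nn.Linear):
        spectral_norm(layer)  # rescales the weight to unit spectral norm on each forward pass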

Selected Work

Featured

Computational Cognitive Modeling of Human Emotion

Fine-tuning RoBERTa-large on GoEmotions (27 emotion labels) with multi-GPU training and layer-wise representation analysis

Python · PyTorch · Transformers · RoBERTa · scikit-learn · +8
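
A condensed sketch of the two ingredients the project combines, a multi-label RoBERTa-large classification head and layer-wise hidden-state extraction, using the public Hugging Face checkpoint; the example text is a placeholder and the training loop is omitted.

```python
# Condensed sketch: multi-label RoBERTa-large head plus layer-wise hidden states.
# Example text is a placeholder; training loop omitted.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-large",
    num_labels=27,                              # 27 GoEmotions labels, as in the project description
    problem_type="multi_label_classification",  # sigmoid + BCE rather than softmax
    output_hidden_states=True,                  # expose every layer for representation analysis
)

batch = tokenizer(["I can't believe this actually worked!"],
                  return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    out = model(**batch)

probs = torch.sigmoid(out.logits)                                  # per-emotion probabilities
layer_cls = torch.stack([h[:, 0, :] for h in out.hidden_states])   # <s>-token embedding per layer
print(probs.shape, layer_cls.shape)  # (1, 27) and (25, 1, hidden_size) for roberta-large
```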

Attention-Enhanced Interpretability in VGG16 for Object Recognition

A modified VGG16 with per-layer attention masks, compared against saliency maps using correlation, IoU, SSIM, and KL-divergence metrics

Python · TensorFlow 2.x · VGG16 · NumPy · OpenCV · +6
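
A framework-agnostic sketch of the four comparison metrics between an attention mask and a saliency map; it assumes both are same-shape, non-negative 2-D heatmaps and uses scikit-image for SSIM, which may differ from the project's exact implementation.

```python
# Sketch of the heatmap-comparison metrics; assumes both maps are same-shape,
# non-negative 2-D arrays. SSIM comes from scikit-image.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def compare_maps(attn, sal, thresh=0.5, eps=1e-8):
    """Correlation, IoU, SSIM, and KL divergence between two 2-D heatmaps."""
    a = (attn - attn.min()) / (attn.max() - attn.min() + eps)  # normalize to [0, 1]
    s = (sal - sal.min()) / (sal.max() - sal.min() + eps)

    corr = float(np.corrcoef(a.ravel(), s.ravel())[0, 1])

    a_bin, s_bin = a >= thresh, s >= thresh                     # binarize for IoU
    iou = float((a_bin & s_bin).sum() / ((a_bin | s_bin).sum() + eps))

    ssim_val = float(ssim(a, s, data_range=1.0))

    p = a.ravel() / (a.sum() + eps)                             # treat maps as distributions
    q = s.ravel() / (s.sum() + eps)
    kl = float(np.sum(p * np.log((p + eps) / (q + eps))))       # KL(attention || saliency)

    return {"correlation": corr, "iou": iou, "ssim": ssim_val, "kl": kl}

print(compare_maps(np.random.rand(14, 14), np.random.rand(14, 14)))
```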

Spotify Song Analysis

Exploratory and machine-learning analysis of Spotify track features to predict popularity and surface patterns in the data

Python · pandas · NumPy · scikit-learn · Matplotlib · +3
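
A minimal sketch of the popularity-prediction step; the CSV path, feature columns, and random-forest choice are assumptions, not the project's exact pipeline.

```python
# Minimal sketch of popularity prediction from audio features.
# File name and feature columns are assumptions about the exported dataset.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

df = pd.read_csv("spotify_tracks.csv")  # hypothetical export of track features
features = ["danceability", "energy", "valence", "tempo", "loudness", "acousticness"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["popularity"], test_size=0.2, random_state=42
)

model = RandomForestRegressor(n_estimators=300, random_state=42)
model.fit(X_train, y_train)
print("R^2:", r2_score(y_test, model.predict(X_test)))

# Which audio features drive predicted popularity?
print(pd.Series(model.feature_importances_, index=features).sort_values(ascending=False))
```
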
Featured

Inductive-Bias Study: CNN vs FCNN on MNIST (2-D Latent Evolution)

Equal-capacity CNN and FCNN constrained to a 2-D latent space, with embedding-evolution videos illustrating the CNN's inductive bias

Python · PyTorch ≥2.0 · Jupyter · Matplotlib · imageio · +6
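
A sketch of the shared idea: route the network through a 2-D bottleneck and return the embedding alongside the logits so it can be recorded each epoch for the evolution videos. Layer sizes here are illustrative, not the exact equal-capacity configurations.

```python
# Sketch: force the classifier through a 2-D latent layer and expose the
# embedding so a fixed test batch can be plotted every epoch.
import torch
import torch.nn as nn

class CNN2D(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        self.to_latent = nn.Linear(32 * 7 * 7, 2)   # the 2-D bottleneck
        self.classifier = nn.Linear(2, 10)

    def forward(self, x):
        z = self.to_latent(self.features(x))        # 2-D embedding per image
        return self.classifier(z), z

model = CNN2D()
logits, z = model(torch.randn(8, 1, 28, 28))
print(logits.shape, z.shape)  # (8, 10), (8, 2)

# During training, z for a fixed test batch is saved each epoch and the frames
# are stitched into a video (e.g. with imageio); the FCNN variant swaps
# `features` for fully connected layers of matched parameter count.
```
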
Featured

Fairness Audit of Jigsaw Toxicity Classifier

BERT, LSTM, and GPT-2 models audited with subgroup AUCs, demographic and error parity, SHAP explainability, and a custom SHarP fairness metric

Python · PyTorch · Transformers · SHAP · scikit-learn · +7
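
A sketch of the subgroup-AUC piece of the audit; the column names, the 0.5 threshold on identity annotations, and the assumption that toxicity labels are already binarized are all assumptions about how the Jigsaw data and model scores are stored.

```python
# Sketch of subgroup AUC: overall AUC plus AUC restricted to each identity
# subgroup. Column names and thresholds are assumed, not the project's exact code.
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_aucs(df, identity_cols, score_col="score", label_col="toxic"):
    """Overall AUC plus AUC computed on examples within each identity subgroup."""
    overall = roc_auc_score(df[label_col], df[score_col])
    rows = []
    for col in identity_cols:
        sub = df[df[col] >= 0.5]          # identity annotations are rater fractions
        if sub[label_col].nunique() < 2:
            continue                       # AUC is undefined without both classes
        rows.append({"subgroup": col, "n": len(sub),
                     "subgroup_auc": roc_auc_score(sub[label_col], sub[score_col])})
    return overall, pd.DataFrame(rows).sort_values("subgroup_auc")

# Hypothetical usage with Jigsaw identity columns:
# overall, table = subgroup_aucs(preds, ["male", "female", "black", "muslim",
#                                        "homosexual_gay_or_lesbian"])
```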

Deep Q-Learning for Atari Boxing-v5 (FML Course)

A DQN-based reinforcement-learning agent trained on Atari Boxing-v5, with evaluation results

Python · PyTorch · Gymnasium · DQN · Reinforcement Learning · +2
Report → access pending
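
A skeleton of the environment and Q-network setup, not the full training loop; the network is the standard Nature-DQN architecture, Atari preprocessing (grayscale, 84×84 resize, 4-frame stacking) is assumed to be handled by wrappers, and ale-py plus the Atari ROMs must be installed for gym.make to succeed.

```python
# Skeleton of the DQN setup for ALE/Boxing-v5; hyperparameters are illustrative,
# and the replay buffer / training loop are omitted.
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("ALE/Boxing-v5")   # requires ale-py and the Atari ROMs
n_actions = env.action_space.n

class QNetwork(nn.Module):
    """Nature-DQN-style convolutional Q-network over stacked frames."""
    def __init__(self, in_channels: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, x):
        return self.net(x / 255.0)   # one Q-value per action

# Input assumed to be 4 stacked 84x84 grayscale frames from standard preprocessing wrappers.
q_net = QNetwork(in_channels=4, n_actions=n_actions)
print(q_net(torch.randint(0, 255, (1, 4, 84, 84), dtype=torch.float32)).shape)
```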