
Zhiqi Wang (汪芷骐)
PhD Student
College of IST, Penn State University

<aside>
<img src="/icons/git_gray.svg" alt="/icons/git_gray.svg" width="40px" /> Github
</aside>
<aside>
<img src="/icons/graduate_gray.svg" alt="/icons/graduate_gray.svg" width="40px" /> Google Scholar
</aside>
CV:
CV.pdf
<aside>
I am a first-year PhD student in the College of IST at Penn State University, advised by Professor Yuchen Yang.
Previously, I received my Bachelor’s degree in Computer Science from RPI, where I was fortunate to be advised by Professor Lei Yu.
</aside>
💡Research Interests:
- ML Privacy: Machine learning models should not reveal information beyond what is necessary to carry out their task. We aim to minimize the risk of disclosing sensitive details to adversaries.
- Adversarial ML: Exploring methods for generating adversarial examples that intentionally manipulate machine learning systems into behaving according to an adversary’s objectives, compromising the systems’ intended functionality.
Topics I have investigated:
- Example difficulty in the image domain.
- Membership inference attacks against traditional ML models.
- Membership inference attacks against LLMs.
- Adversarial prompts against LLMs.
- LLM alignment faking.
Publications:
Membership Inference Attacks as Privacy Tools: Reliability, Disparity and Ensemble
Z. Wang, C. Zhang, Y. Chen, N. Baracaldo, S. Kadhe, L. Yu
To appear at ACM CCS 2025
GitHub: RPI-DSPlab/mia-disparity (source code for this paper)