Chawin Sitawarin
Postdoctoral Researcher @ Meta
Verified email at meta.com · Homepage
Title · Cited by · Year
Enhancing robustness of machine learning systems via data transformations
AN Bhagoji, D Cullina, C Sitawarin, P Mittal
2018 52nd Annual Conference on Information Sciences and Systems (CISS), 1-5, 2018
Cited by 443* · 2018
DARTS: Deceiving autonomous cars with toxic signs
C Sitawarin, AN Bhagoji, A Mosenia, M Chiang, P Mittal
arXiv preprint arXiv:1802.06430, 2018
Cited by 388* · 2018
Analyzing the robustness of open-world machine learning
V Sehwag, AN Bhagoji, L Song, C Sitawarin, D Cullina, M Chiang, P Mittal
Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security …, 2019
Cited by 98 · 2019
Beyond Grand Theft Auto V for training, testing and enhancing deep learning in self-driving cars
M Martinez, C Sitawarin, K Finch, L Meincke, A Yablonski, A Kornhauser
arXiv preprint arXiv:1712.01397, 2017
Cited by 81 · 2017
SAT: Improving adversarial training via curriculum-based loss smoothing
C Sitawarin, S Chakraborty, D Wagner
Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security …, 2021
Cited by 73* · 2021
Inverse-designed photonic fibers and metasurfaces for nonlinear frequency conversion
C Sitawarin, W Jin, Z Lin, AW Rodriguez
Photonics Research 6 (5), B82-B89, 2018
Cited by 66* · 2018
On the robustness of deep k-nearest neighbors
C Sitawarin, D Wagner
2019 IEEE Security and Privacy Workshops (SPW), 1-7, 2019
Cited by 65* · 2019
StruQ: Defending against prompt injection with structured queries
S Chen, J Piet, C Sitawarin, D Wagner
arXiv preprint arXiv:2402.06363, 2024
Cited by 34 · 2024
Jatmo: Prompt injection defense by task-specific finetuning
J Piet, M Alrashed, C Sitawarin, S Chen, Z Wei, E Sun, B Alomair, ...
Computer Security – ESORICS 2024, 2024
Cited by 34 · 2024
Defending against adversarial examples with k-nearest neighbor
C Sitawarin, D Wagner
arXiv preprint arXiv:1906.09525, 2019
Cited by 31 · 2019
PAL: Proxy-guided black-box attack on large language models
C Sitawarin, N Mu, D Wagner, A Araujo
arXiv preprint arXiv:2402.09674, 2024
Cited by 24 · 2024
Minimum-norm adversarial examples on kNN and kNN-based models
C Sitawarin, D Wagner
2020 IEEE Security and Privacy Workshops (SPW), 34-40, 2020
Cited by 23 · 2020
Mark my words: Analyzing and evaluating language model watermarks
J Piet, C Sitawarin, V Fang, N Mu, D Wagner
arXiv preprint arXiv:2312.00273, 2023
Cited by 22 · 2023
Demystifying the adversarial robustness of random transformation defenses
C Sitawarin, ZJ Golan-Strieb, D Wagner
International Conference on Machine Learning, 20232-20252, 2022
Cited by 21 · 2022
Better the devil you know: An analysis of evasion attacks using out-of-distribution adversarial examples
V Sehwag, AN Bhagoji, L Song, C Sitawarin, D Cullina, M Chiang, P Mittal
arXiv preprint arXiv:1905.01726, 2019
Cited by 21 · 2019
Vulnerability detection with code language models: How far are we?
Y Ding, Y Fu, O Ibrahim, C Sitawarin, X Chen, B Alomair, D Wagner, ...
arXiv preprint arXiv:2403.18624, 2024
Cited by 18 · 2024
REAP: A Large-Scale Realistic Adversarial Patch Benchmark
N Hingun, C Sitawarin, J Li, D Wagner
Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023
Cited by 12 · 2023
Part-Based Models Improve Adversarial Robustness
C Sitawarin, K Pongmala, Y Chen, N Carlini, D Wagner
The Eleventh International Conference on Learning Representations, 2023
Cited by 11 · 2023
Not all pixels are born equal: An analysis of evasion attacks under locality constraints
V Sehwag, C Sitawarin, AN Bhagoji, A Mosenia, M Chiang, P Mittal
Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications …, 2018
Cited by 9 · 2018
OODRobustBench: a benchmark and large-scale analysis of adversarial robustness under distribution shift
L Li, Y Wang, C Sitawarin, M Spratling
Proceedings of the 41st International Conference on Machine Learning, 2024
Cited by 8* · 2024
Articles 1–20