
Xi Wu

I am a software engineer at Google. My current primary interest is the interplay of AI and databases for knowledge extraction, representation, and management, and how to leverage it for agentic reasoning. I am also interested in building higher-level query foundations for multi-granular data analysis. Our work in data analytics was highlighted in the Future of Data Science session at Google Cloud Next 2025 (our approach). In addition to my core work, I conduct research in machine learning and trustworthy AI.

Prior to Google, I received my Ph.D. in Computer Science from the University of Wisconsin-Madison, advised by Jeffrey F. Naughton and Somesh Jha. I was awarded a Google PhD Fellowship in 2016.

Publications

Two Heads are Actually Better than One: Towards Better Adversarial Robustness via Transduction and Rejection
Nils Palumbo, Yang Guo, Xi Wu, Jiefeng Chen, Yingyu Liang, Somesh Jha
ICML 2024, arXiv 2023
[short intro]
While combining transduction and rejection holds theoretical promise, prior methods often fail in practical deep learning settings against strong adversarial attacks. This paper bridges that gap by repurposing a reduction technique, originally used to identify vulnerabilities, to construct an effective transductive defense. We demonstrate improved sample complexity for robust generalization and achieve significantly higher robust accuracy (e.g., 81.6% on CIFAR-10) against state-of-the-art attacks such as AutoAttack and GMSA.

Stratified Adversarial Robustness with Rejection
Jiefeng Chen, Jayaram Raghuram, Jihye Choi, Xi Wu, Yingyu Liang, Somesh Jha
ICML 2023, arXiv 2023, AAAI Workshop on Adversarial Machine Learning and Beyond 2022 (Oral Presentation and Best Paper Award)
[code]
The Trade-off between Universality and Label Efficiency of Representations from Contrastive Learning
Zhenmei Shi, Jiefeng Chen, Kunyang Li, Jayaram Raghuram, Xi Wu, Yingyu Liang, Somesh Jha
ICLR 2023 (Spotlight)
[short intro] [code]
Foundation models aim for both universality and label efficiency, but these goals often conflict. This work provides a theoretical analysis showing that while diverse pre-training data improves universality, it can dilute task-specific features and increase sample complexity. We propose a contrastive regularization method to navigate this trade-off, validated across multiple real-world datasets.
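The proposed regularizer can be sketched as a contrastive (InfoNCE-style) term added to the fine-tuning objective. The sketch below, in pure Python, is illustrative only: the exact form of the term, and all names (`info_nce`, `lam`), are assumptions, not the paper's implementation.

```python
import math

# Hedged sketch: fine-tuning loss plus a contrastive regularizer.
# The InfoNCE-style form and all names here are illustrative.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def info_nce(anchor, positive, negatives, tau=0.5):
    # -log( exp(sim(a,p)/tau) / (exp(sim(a,p)/tau) + sum_n exp(sim(a,n)/tau)) )
    logits = [dot(anchor, positive) / tau] + [dot(anchor, n) / tau for n in negatives]
    m = max(logits)  # stabilize the log-sum-exp
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[0]

def regularized_loss(task_loss, anchor, positive, negatives, lam=0.1):
    # task loss on labeled target data + lambda * contrastive term
    return task_loss + lam * info_nce(anchor, positive, negatives)
```

Pulling the anchor toward its positive and away from negatives is what preserves task-specific features that diverse pre-training data would otherwise dilute.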

Towards Evaluating the Robustness of Neural Networks Learned by Transduction
Jiefeng Chen, Xi Wu, Yang Guo, Yingyu Liang, Somesh Jha
ICLR 2022, arXiv 2021
[code]
Detecting Errors and Estimating Accuracy on Unlabeled Data with Self-training Ensembles
Jiefeng Chen, Frederick Liu, Besim Avci, Xi Wu, Yingyu Liang, Somesh Jha
NeurIPS 2021, arXiv 2021
[code]
ATOM: Robustifying Out-of-Distribution Detection Using Outlier Mining
Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang and Somesh Jha
ECML 2021, arXiv 2020, ICML UDL 2020
[code]
DIFF: A Relational Interface for Large-Scale Data Explanation
Firas Abuzaid, Peter Kraft, Sahaana Suri, Edward Gan, Eric Xu, Atul Shenoy, Asvin Ananthanarayan, John Sheu, Erik Meijer, Xi Wu, Jeffrey F. Naughton, Peter Bailis, Matei Zaharia
The VLDB Journal (2021), VLDB 2019 (Invited to "Best of VLDB 2019" Special Issue)
[short intro]
Modern explanation engines often operate as standalone tools, limiting their interoperability with SQL-based workflows. We propose DIFF, a relational aggregation operator that unifies data explanation with declarative query processing and has been deployed in production at companies including Microsoft and Facebook.

Concise Explanations for Neural Networks using Adversarial Training
Prasad Chalasani, Jiefeng Chen, Amrita Roy Chowdhury, Somesh Jha, Xi Wu
ICML 2020, arXiv 2018
[code]
Robust Attribution Regularization
Jiefeng Chen, Xi Wu, Vaibhav Rastogi, Yingyu Liang, Somesh Jha
NeurIPS 2019, arXiv 2019
[short intro] [code, slides, poster, Alta Cognita]
Standard training often yields fragile attributions that an adversary can manipulate without changing the model's prediction. This work addresses the problem through axiomatic attribution, proposing Robust Attribution Regularization (RAR). By incorporating an IG-based objective \(\min_{\theta} \mathbb{E} [ \mathcal{L}(f_\theta(x), y) + \lambda \Omega(\mathrm{IG}(x)) ]\) into training, we unify robust prediction with robust explanation, ensuring that model attributions remain stable even under adversarial conditions.
Towards Understanding Limitations of Pixel Discretization Against Adversarial Attacks
Jiefeng Chen, Xi Wu, Vaibhav Rastogi, Yingyu Liang, Somesh Jha
EuroS&P 2019, arXiv 2018
[code]
Tuple-oriented Compression for Large-scale Mini-batch Stochastic Gradient Descent
Fengan Li, Lingjiao Chen, Yijing Zeng, Arun Kumar, Xi Wu, Jeffrey F. Naughton, Jignesh M. Patel
SIGMOD 2019, arXiv 2017
Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training
Xi Wu, Uyeong Jang, Jiefeng Chen, Lingjiao Chen, Somesh Jha
ICML 2018, arXiv 2017
[slides]
Bolt-on Differential Privacy for Scalable Stochastic Gradient Descent-based Analytics
Xi Wu, Fengan Li, Arun Kumar, Kamalika Chaudhuri, Somesh Jha, Jeffrey F. Naughton
SIGMOD 2017, arXiv 2016
[short intro] [slides]
Scaling private SGD typically trades model accuracy against system overhead. We close this gap with a novel bolt-on approach using output perturbation. By providing a new analysis of the \(L_2\)-sensitivity of SGD, our algorithm integrates seamlessly into scalable RDBMS-based systems like Bismarck, incurring virtually no overhead while yielding up to 4X better test accuracy.
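A minimal sketch of the bolt-on idea, assuming the \(L_2\)-sensitivity bound is simply given (deriving it for SGD is the paper's contribution): train with ordinary one-pass SGD, then perturb only the final model. The noise distribution shown (gamma-distributed radius, uniform direction) is the standard output-perturbation recipe; all names and constants are illustrative.

```python
import math
import random

def sgd(data, lr=0.1):
    # one-pass SGD on logistic loss L = log(1 + exp(-y w.x)), weights start at 0
    w = [0.0, 0.0]
    for x, y in data:
        margin = y * sum(wi * xi for wi, xi in zip(w, x))
        g = -y / (1 + math.exp(margin))  # scalar gradient coefficient
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w

def output_perturb(w, sensitivity, eps, rng):
    # sample noise with density proportional to exp(-eps * ||b||_2 / sensitivity):
    # radius ~ Gamma(d, sensitivity/eps), direction uniform on the sphere
    d = len(w)
    radius = rng.gammavariate(d, sensitivity / eps)
    direction = [rng.gauss(0, 1) for _ in range(d)]
    norm = math.sqrt(sum(v * v for v in direction))
    return [wi + radius * v / norm for wi, v in zip(w, direction)]
```

Because the noise touches only the trained weights, the SGD loop itself runs unmodified, which is why the approach bolts onto an in-RDBMS training system with essentially no overhead.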

Objective Metrics and Gradient Descent Algorithms for Adversarial Examples in Machine Learning
Uyeong Jang, Xi Wu, Somesh Jha
ACSAC 2017
A Study of Stability in Data Privacy
Advisors: Jeffrey F. Naughton, Somesh Jha
Ph.D. Thesis, UW-Madison, August 2016
ProQuest
A Methodology for Modeling Model-Inversion Attacks
Xi Wu, Matthew Fredrikson, Somesh Jha, Jeffrey F. Naughton
CSF 2016
[slides]
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, Ananthram Swami
S&P (Oakland) 2016, arXiv 2015
A Completeness Theory for Polynomial (Turing) Kernelization
with Danny Hermelin, Stephan Kratsch, Karolina Soltys, Magnus Wahlstrom
Algorithmica 2015, IPEC 2013
Uncertainty Aware Query Execution Time Prediction
Wentao Wu, Xi Wu, Hakan Hacigümüs, Jeffrey F. Naughton
VLDB 2014
Weak Compositions and Their Applications to Polynomial Lower Bounds for Kernelization
with Danny Hermelin
SODA 2012, ECCC 2011
[short intro] [slides]
Proving polynomial lower bounds for kernelization is a central challenge in parameterized complexity. We introduce the framework of weak compositions to derive tight lower bounds for various natural problems, such as \(d\)-Set Cover and Hitting Set. We also strengthen super-polynomial bounds to super-quasi-polynomial ones, linking the existence of efficient kernels to the stability of the exponential hierarchy.
COREMU: A Scalable and Portable Parallel Full-system Emulator
Zhaoguo Wang, Ran Liu, Yufei Chen, Xi Wu, Haibo Chen, Binyu Zang
PPoPP 2011
Extended Islands of Tractability for Parsimony Haplotyping
with Rudolf Fleischer, Jiong Guo, Rolf Niedermeier, Johannes Uhlmann, Yihui Wang, Mathias Weller
CPM 2010
Experimental Study of FPT Algorithms for the Directed Feedback Vertex Set Problem
with Rudolf Fleischer, Liwei Yuan
ESA 2009
[slides]
Control Flow Obfuscation with Information Flow Tracking
Haibo Chen, Liwei Yuan, Xi Wu, Bo Huang, Pen-chung Yew, Binyu Zang
MICRO 2009
From Speculation to Security: Practical and Efficient Information Flow Tracking using Speculative Hardware
Haibo Chen, Xi Wu, Liwei Yuan, Binyu Zang, Pen-chung Yew, Frederic T. Chong
ISCA 2008

Manuscripts

Grained Relational Algebra
Xi Wu, Eugene Wu, Zichen Zhu, Fengan Li, Jeffrey F. Naughton
arXiv 2025
[short intro]
Analytical data naturally lives at multiple granularities, yet SQL's data model has no container that holds multiple relations at different grains as a single composable result. This forces users to either join into one flat table — causing silent fan-out errors — or manage separate tables that do not compose. Grained Relational Algebra (GRA) is a minimal, fully compatible extension to relational algebra designed for multi-grain data analytics. GRA addresses this structural gap by introducing the Grained Relation Space (GRS): a principled, schema-level container that organizes relations by their dimensional footprint rather than ad-hoc table names. Within a GRS, each relation retains its native grain, operators produce a GRS as output (algebraic closure), and grain-aware analysis models (GAMs) encapsulate multi-grain computation as reusable, composable modules. The result is grain safety by design: if a required grain is absent, the system produces an explicit error rather than a silent wrong answer. GRA has been implemented and deployed at Google as a query service, serving hundreds of internal teams at scale.
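The silent fan-out failure mode described above is easy to reproduce in plain SQL. GRA itself is not publicly available, so the schema and numbers below are invented purely to illustrate the problem it guards against:

```python
import sqlite3

# Two relations at different grains: one row per order vs. one row per shipment.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders(order_id INTEGER, amount REAL);            -- order grain
CREATE TABLE shipments(order_id INTEGER, shipment_id INTEGER); -- shipment grain
INSERT INTO orders VALUES (1, 100.0), (2, 50.0);
INSERT INTO shipments VALUES (1, 10), (1, 11), (2, 12);
""")

# Naive flat join: order 1 fans out across its two shipments, so its
# amount is counted twice -- a silent wrong answer (250 instead of 150).
(naive,) = con.execute(
    "SELECT SUM(o.amount) FROM orders o JOIN shipments s USING (order_id)"
).fetchone()

# Grain-aware version: aggregate each relation at its own grain.
(correct,) = con.execute("SELECT SUM(amount) FROM orders").fetchone()
```

A grain-safe system would reject the naive query (the requested aggregate lives at the order grain, not the joined order-shipment grain) instead of returning the inflated total.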
Towards Adversarial Robustness via Transductive Learning
Jiefeng Chen, Yang Guo, Xi Wu, Tianqi Li, Qicheng Lao, Yingyu Liang, Somesh Jha
arXiv 2021
Robust Out-of-distribution Detection for Neural Networks
Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, Somesh Jha
arXiv 2020
Representation Bayesian Risk Decompositions and Multi-Source Domain Adaptation
Xi Wu, Yang Guo, Jiefeng Chen, Yingyu Liang, Somesh Jha, Prasad Chalasani
arXiv 2020
Rearchitecting Classification Frameworks For Increased Robustness
Varun Chandrasekaran, Brian Tang, Nicolas Papernot, Kassem Fawaz, Somesh Jha, Xi Wu
arXiv 2019
Revisiting Differentially Private Regression: Lessons From Learning Theory and their Consequences
Xi Wu, Matthew Fredrikson, Wentao Wu, Somesh Jha, Jeffrey F. Naughton
arXiv 2015