Publications

Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.


1 - 15 of 10822 publications
    mmMUSE: An mmWave-based Motion-resilient Universal Speech Enhancement System
    Chenming He
    Yanyong Zhang
    Kai Wang
    Dequan Wang
    Lingyu Wang
    Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), ACM (2026) (to appear)
    Voice-based smart systems can greatly enhance user experiences by allowing higher-quality interactions through better voice perception. Speech enhancement can benefit such systems by isolating noise from speech. Recently, integrating millimeter-wave (mmWave) sensing with audio for speech perception has gained increasing attention due to microphones' limitations in noisy environments. However, mmWave-based vocal extraction is severely affected by motion, which disperses vocal signals across ranges and introduces distortions. In this paper, we propose an mmWave-based motion-resilient universal speech enhancement system called mmMUSE, which fuses mmWave and audio signals. To mitigate motion interference, we develop a Doppler-based method for motion-robust vocal signal extraction. Moreover, by introducing the Vocal-Noise-Ratio metric to assess the prominence of vocal signals from mmWave, we achieve real-time voice activity detection that gains 3.81 dB of SISDR in noisy speech. Additionally, we design a two-stage complex-valued network that includes an attention-based fusion network for cross-modal complementing and a time-frequency masking network that corrects the amplitude and phase of speech to isolate noise. Using mmWave and audio datasets from 46 participants, mmMUSE outperforms state-of-the-art speech enhancement models, achieving an average SISDR improvement of 3.12 dB. It further achieves SISDR improvements of 16.51 dB, 17.93 dB, 14.93 dB, and 18.95 dB in controlled environments involving intense noise, extensive motion, multiple speakers, and various obstructive materials, respectively. Finally, we evaluate mmMUSE in real-world scenarios including running, public spaces, and driving, maintaining a word error rate (WER) below 10%.
    Productionizing Quantum Mass Production
    Bill Huggins
    Nathan Wiebe
    arXiv (2026) (to appear)
    For many practical applications of quantum computing, the slowest and most costly steps involve coherently accessing classical data. We help address this challenge by applying mass production techniques, which can sometimes allow us to perform an operation many times in parallel for a cost comparable to that of a single execution [1-3]. We combine existing mass-production results with modern approaches for loading classical data using "quantum read-only memory." We show that quantum mass production techniques offer no benefit under a cost model that counts only the number of non-Clifford gates. However, analyzing the constant factors in a more nuanced cost model, we find that it may be possible to obtain a cost reduction of an order of magnitude or more for a variety of reasonably sized fault-tolerant quantum algorithms. We present several applications of quantum mass-production techniques beyond naive parallelization, including a strategy for reducing the cost of serial calls to the same data-loading step.
    FreshBrew: A Benchmark for Evaluating AI Agents on Java Code Migration
    Diganta Misra
    Yanqi Luo
    Anjali Sridhar
    Justine Gehring
    Silvio Soares Ribeiro Junior
    2026
    AI coding assistants are rapidly becoming integral to modern software development. A key challenge in this space is the continual need to migrate and modernize codebases in response to evolving software ecosystems. Traditionally, such migrations have relied on rule-based systems and human intervention. With the advent of powerful large language models (LLMs), AI-driven agentic frameworks offer a promising alternative, but their effectiveness remains underexplored. In this paper, we introduce FreshBrew, a novel benchmark for evaluating AI-based agentic frameworks on project-level Java migrations. We benchmark several such frameworks, powered by state-of-the-art LLMs, and compare their performance against established rule-based tools. Our evaluation of AI agents on this benchmark of 228 repositories shows that the top-performing model, Gemini 2.5 Flash, can successfully migrate 56.5% of projects to JDK 17. Our empirical analysis reveals the critical strengths and limitations of current agentic approaches, offering actionable insights into their real-world applicability. By releasing FreshBrew publicly upon acceptance, we aim to facilitate rigorous, reproducible evaluation and catalyze progress in AI-driven codebase modernization.
    Scalability of Generative AI Models: Challenges and Opportunities in Large-Scale Data Generation and Training
    International Journal of Computer Science and Information Technology Research (IJCSITR) (2025)
    Mutual Prediction in Human-AI Coevolution
    Chloe Loewith
    Antikythera Digital Journal (2025)
    Evolutionary relationships between entities within an ecological niche are characterised by varying degrees of interdependence and resulting forms of symbiotic, predatory or competitive behaviors. This paper hypothesizes that mutual prediction is a defining factor in the kind of relationship which forms between entities, as well as the power distribution and stability of that relationship. Throughout history, humans have engaged in complex mutually predictive relationships with the animals we domesticate, the plants we eat and the tools we create. We have generally had a better predictive model of the entities we have co-evolved with than they have had of us. In AI we encounter the first entity which may be able to predict us - including our thoughts, beliefs, feelings and plans - better than we can predict it. The current state of human predictive advantage may give way to predictive equilibrium or even human out-prediction by AIs. This paper defines a classification system for degrees of mutual prediction in human-AI interactions ranging from rules-based prediction through to a speculative capacity for mindreading, and uses the classification as axes to map human predictive ability against AI predictive ability. Past, present, and speculated future relationships between humans and AIs are plotted on the map, encompassing cases of predictive imbalance in both directions and exploring the implications of mutual prediction for human-AI coevolutionary paths. The examples highlight possible sources of human-AI misalignment and the mutual prediction framework provides a lens through which to understand AI systems as part of evolutionary processes at large.
    Improving Informally Romanized Language Identification
    Adrian Benton
    Christo Kirov
    Proceedings of EMNLP (2025) (to appear)
    The Latin script is often used informally to write languages with non-Latin native scripts. In many cases (e.g., most languages in India), there is no orthography, meaning that there is no conventional spelling of words in the Latin script, hence there will be high spelling variability in written text. Such romanization can render languages that are normally easily distinguished based on script highly confusable, such as Hindi and Urdu. In this work, we present methods to improve language identification of romanized text by improving methods to synthesize training sets. We find that training on synthetic samples which incorporate natural spelling variation yields higher language identification system accuracy than including available naturally occurring examples in the training set or even training higher capacity models. We demonstrate new state-of-the-art language identification performance on romanized text from 20 Indic languages in the Bhasha-Abhijnaanam evaluation set (Madhani et al., 2023a), improving test F1 from the reported 74.7% (using a pretrained neural model) to 85.4% using a linear classifier trained solely on synthetic data and 88.2% when also training on available harvested text.
    Inference-time scaling has been successful in enhancing large language model (LLM) performance by increasing computation at test time, but it often relies on external verifiers or is not optimized for manageable computational budgets. To address these limitations, we propose DynScaling, which introduces two primary innovations: an integrated parallel-sequential sampling strategy and a bandit-based dynamic budget allocation framework. The integrated sampling strategy unifies parallel and sequential sampling by constructing synthetic sequential reasoning chains from initially independent parallel responses, promoting diverse and coherent reasoning trajectories. The dynamic budget allocation framework formulates the allocation of computational resources as a multi-armed bandit problem, adaptively distributing the inference budget across queries based on the uncertainty of previously sampled responses, thereby maximizing computational efficiency. By synergizing these components, DynScaling effectively improves LLM performance under practical resource constraints without the need for external verifiers. Experimental results demonstrate that DynScaling consistently surpasses existing verifier-free inference scaling baselines in both task performance and computational cost.
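The bandit-based budget allocation described above can be illustrated with a minimal sketch. This is not DynScaling's actual algorithm: it is a generic UCB-style allocator in which each query is an arm and `uncertainty_fn` is a hypothetical stand-in for the uncertainty of a freshly sampled response:

```python
import math

def allocate_budget(uncertainty_fn, num_queries, total_budget, explore=1.0):
    """Spend total_budget samples across queries, favoring uncertain ones.

    uncertainty_fn(q) returns an uncertainty score in [0, 1] for one new
    sample drawn for query q (e.g., disagreement among responses).
    """
    counts = [0] * num_queries
    scores = [0.0] * num_queries
    # Seed each query with one sample.
    for q in range(num_queries):
        counts[q] = 1
        scores[q] = uncertainty_fn(q)
    for t in range(num_queries, total_budget):
        # UCB index: mean uncertainty plus an exploration bonus.
        ucb = [scores[q] / counts[q]
               + explore * math.sqrt(math.log(t + 1) / counts[q])
               for q in range(num_queries)]
        q = max(range(num_queries), key=lambda i: ucb[i])
        counts[q] += 1
        scores[q] += uncertainty_fn(q)
    return counts
```

Under this scheme, queries whose sampled responses keep disagreeing receive more of the remaining inference budget.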
    A Scalable Framework for Evaluating Health Language Models
    Neil Mallinar
    Tony Faranesh
    Brent Winslow
    Nova Hammerquist
    Ben Graef
    Cathy Speed
    Mark Malhotra
    Shwetak Patel
    Xavi Prieto
    Daniel McDuff
    Ahmed Metwally
    (2025)
    Large language models (LLMs) have emerged as powerful tools for analyzing complex datasets. Recent studies demonstrate their potential to generate useful, personalized responses when provided with patient-specific health information that encompasses lifestyle, biomarkers, and context. As LLM-driven health applications are increasingly adopted, rigorous and efficient one-sided evaluation methodologies are crucial to ensure response quality across multiple dimensions, including accuracy, personalization and safety. Current evaluation practices for open-ended text responses heavily rely on human experts. This approach introduces human factors, is often cost-prohibitive and labor-intensive, and hinders scalability, especially in complex domains like healthcare where response assessment necessitates domain expertise and consideration of multifaceted patient data. In this work, we introduce Adaptive Precise Boolean rubrics: an evaluation framework that streamlines human and automated evaluation of open-ended questions by identifying gaps in model responses using a minimal set of targeted rubric questions. Our approach is based on recent work in more general evaluation settings that contrasts a smaller set of complex evaluation targets with a larger set of more precise, granular targets answerable with simple boolean responses. We validate this approach in metabolic health, a domain encompassing diabetes, cardiovascular disease, and obesity. Our results demonstrate that Adaptive Precise Boolean rubrics yield higher inter-rater agreement among expert and non-expert human evaluators, and in automated assessments, compared to traditional Likert scales, while requiring approximately half the evaluation time of Likert-based methods. This enhanced efficiency, particularly in automated evaluation and non-expert contributions, paves the way for more extensive and cost-effective evaluation of LLMs in health.
    Mg(OH)2 holds potential as an alkalinity source for Ocean Alkalinity Enhancement (OAE). It is a current byproduct of desalination treatment through the alkalinity exchange of electrochemically derived NaOH to the Mg-rich reverse osmosis reject brine. Characterization found no difference in chemical composition between seawater-precipitated and industrially sourced Mg(OH)2, with both having high (>98%) purity. Differences were found in crystallinity, with industrial sources having a higher degree of crystallinity of 0.83-0.85 compared to 0.16-0.33 for seawater-precipitated paste. Mg(OH)2 with a higher degree of crystallinity (>80%) had significantly slower dissolution rates than Mg(OH)2 with a lower degree of crystallinity (<20%). Results revealed a strong inverse relation between degree of crystallinity and dissolution rate for both seawater-precipitated and industrially sourced Mg(OH)2. Seawater-precipitated Mg(OH)2, with its similar purity to industrial sources yet faster and more complete dissolution and alkalinity release, could hold an advantage over other alkalinity sources for OAE applications with its seemingly tunable dissolution kinetics.
    We show that there is no randomized LOCAL algorithm for maximal matching (MM) that takes o(min(log D, sqrt(log n))) rounds, even on regular graphs and trees. This improves upon the KMW bound from 21 years ago and shows a surprising separation between MM and MIS on trees, among other implications.
    Consideration on CMAS arriving as discrete particles
    Eric H. Jordan
    Stephen Jordan
    Hiram Diaz
    Byung-gun Jun
    (2025)
    Turbine contaminants known as CMAS mostly arrive at turbine hot sections as individual particles spanning a range of mineral compositions; within a small area, their deposition can be treated as splats arriving at random locations. By the time the particles reach the hot section, the maximum particle size is believed to be 10 microns. A simplified heat transfer analysis suggests the arrival temperature will be the turbine inlet temperature. Using AFRL03 as a representative set of possible minerals, for most turbine inlet temperatures a mixture of melted and un-melted particles will arrive. There are 31 combinations of the 5 minerals of AFRL03, presenting a wide range of melting points experimentally investigated in this paper. As expected, combinations generally melt at lower temperatures than the highest-melting mineral in each combination. The progression of conditions, starting with the arrival of isolated individual minerals, is modeled using Monte Carlo simulations and known results from percolation theory. This allows understanding of the development of coverage fraction, and of the potential for mineral mixing important to melt behavior, as a function of normalized CMAS dose. Using the normalized CMAS dose it is also possible to comment on the likely relative fraction of coating life during which less than fully homogenized CMAS dominates behavior. It is noteworthy that 4 out of 5 minerals and 4 mineral combinations lack either calcium or silicon or both and also melt below 1300°C. Interaction in the early deposition stage therefore involves non-CMAS-like chemistries.
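The growth of coverage fraction with dose can be illustrated with a small Monte Carlo sketch. This is not the paper's simulation; it is a generic model, with all names and parameters hypothetical, in which circular splats land at uniform random positions on a unit square and coverage is estimated on a grid. For splat area a and splat count n, random placement gives an expected coverage of roughly 1 - exp(-n*a):

```python
import random

def splat_coverage(num_splats, splat_radius, grid=64, seed=0):
    """Monte Carlo estimate of the area fraction covered by circular
    splats at uniform random positions on a unit square (wraparound
    distances avoid edge effects)."""
    rng = random.Random(seed)
    centers = [(rng.random(), rng.random()) for _ in range(num_splats)]
    covered = 0
    for i in range(grid):
        for j in range(grid):
            x, y = (i + 0.5) / grid, (j + 0.5) / grid
            for cx, cy in centers:
                # Torus distance from grid point to splat center.
                dx = min(abs(x - cx), 1 - abs(x - cx))
                dy = min(abs(y - cy), 1 - abs(y - cy))
                if dx * dx + dy * dy <= splat_radius ** 2:
                    covered += 1
                    break
    return covered / (grid * grid)
```

Doubling the number of splats increases coverage sublinearly, since later splats increasingly overlap earlier ones, which is the saturation behavior the normalized-dose analysis captures.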
    Tesseract is a most-likely-error decoder designed for quantum error-correcting codes. Tesseract conducts a search through a graph on the set of all subsets of errors to find the lowest-cost subset of errors consistent with the input syndrome. Although this set is exponentially large, the search can be made efficient in practice for random errors using A* along with a variety of pruning heuristics. We show through benchmark circuits for surface, color, and bivariate-bicycle codes that Tesseract is competitive with integer-programming-based decoders at moderate physical error rates. Finally, we compare surface and bivariate-bicycle codes using most-likely-error decoding.
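The search over subsets of errors can be sketched as uniform-cost search (A* with a zero heuristic) on a toy error model. This is an illustration, not the Tesseract implementation: here `H[j]` is a hypothetical set of checks flipped by error j, and the real decoder adds the pruning heuristics the abstract mentions:

```python
import heapq
import itertools

def most_likely_error(H, syndrome, costs):
    """Cost of the lowest-cost subset of errors whose combined syndrome
    matches the observed syndrome, or None if none exists."""
    target = frozenset(syndrome)
    tie = itertools.count()  # break cost ties without comparing sets
    # State: (cost so far, tie-breaker, next error index, current syndrome).
    heap = [(0.0, next(tie), 0, frozenset())]
    seen = {}
    while heap:
        cost, _, j, syn = heapq.heappop(heap)
        if syn == target:
            return cost  # first goal popped is optimal (costs >= 0)
        if seen.get((j, syn), float("inf")) <= cost:
            continue
        seen[(j, syn)] = cost
        if j == len(H):
            continue
        # Branch: exclude error j, or include it (XOR of flipped checks).
        heapq.heappush(heap, (cost, next(tie), j + 1, syn))
        heapq.heappush(heap, (cost + costs[j], next(tie), j + 1, syn ^ H[j]))
    return None
```

With an admissible nonzero heuristic and pruning, the same frontier search becomes the A* variant described in the abstract.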
    Simulation-Based Inference: A Practical Guide
    Michael Deistler
    Jan Boelts
    Peter Steinbach
    Guy Moss
    Thomas Moreau
    Manuel Gloeckler
    Pedro L. C. Rodriguez
    Julia Linhart
    Janne K. Lappalainen
    Benjamin Kurt Miller
    Pedro J. Goncalves
    Cornelius Schröder
    Jakob H. Macke
    arXiv (2025)
    A central challenge in many areas of science and engineering is to identify model parameters that are consistent with empirical data and prior knowledge. Bayesian inference offers a principled framework for this task, but can be computationally prohibitive when models are defined by stochastic simulators. Simulation-Based Inference (SBI) provides a suite of methods to overcome this limitation and has enabled scientific discoveries in fields such as particle physics, astrophysics and neuroscience. The core idea of SBI is to train neural networks on data generated by a simulator, without requiring access to likelihood evaluations. Once trained, the neural network can rapidly perform inference on empirical observations without requiring additional optimization or simulations. In this tutorial, we provide a practical guide for practitioners aiming to apply SBI methods. We outline a structured SBI workflow and offer practical guidelines and diagnostic tools for every stage of the process: from setting up the simulator and prior, choosing the SBI method and neural network architecture, and training the inference model, to validating results and interpreting the inferred parameters. We illustrate these steps through examples from astrophysics, psychophysics, and neuroscience. This tutorial empowers researchers to apply state-of-the-art SBI methods, facilitating efficient parameter inference for scientific discovery.
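The likelihood-free setup can be illustrated with the classical rejection-sampling ancestor of SBI. The tutorial's methods train neural networks instead of rejecting samples; this toy, with hypothetical names throughout, only shows the core ingredients of prior, stochastic simulator, and observation:

```python
import random

def rejection_sbi(simulator, prior_sampler, observation, eps, n_draws, rng):
    """Keep prior draws whose simulated output lands within eps of the
    observed data; the kept draws approximate the posterior."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler(rng)
        x = simulator(theta, rng)
        if abs(x - observation) < eps:
            accepted.append(theta)
    return accepted

# Example: infer the mean of a Gaussian with known noise scale.
rng = random.Random(0)
simulator = lambda theta, rng: theta + rng.gauss(0.0, 0.5)
prior = lambda rng: rng.uniform(-5.0, 5.0)
posterior = rejection_sbi(simulator, prior, observation=2.0, eps=0.3,
                          n_draws=20000, rng=rng)
```

Rejection sampling wastes most simulations as the tolerance shrinks, which is precisely the inefficiency that neural SBI methods address by learning from every simulated pair.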
    A high-level talk about quantum computing at Google, given as an invited talk at the Kavli Frontiers of Science.
    Ethical Co-Development of AI Applications with Indigenous Communities
    Claudio Pinhanez
    Edem Wornyo
    (2025) (to appear)
    This course explores how researchers and practitioners can engage ethically with Indigenous communities when developing AI- and data-intensive applications. Key issues such as fair engagement, legal constraints, reciprocity, and informed consent are discussed based on examples drawn from the instructors' experience. The course also examines good practices in co-design and co-development processes, data governance and sovereignty issues and systems, decolonial software licensing, and processes of technology transfer and appropriation. In its practical part, the course critically discusses examples and cases gathered from the audience to explore the diversity of issues and solutions when working with Indigenous communities.