Skip to main content
Menu

OxNLP talk abstracts

Talk abstracts

OXNLP 12 10 22 57 20 banner

8 March 2024

Panelists include Fazl Barez, Emanuele La Malfa

Panel discussion, on the theme of Benchmarking Large Language Models

This week we will have a special session for panel discussion, with the theme of Benchmarking Large Language Models. As Large Language Models are increasingly widely deployed in various areas, we are concerned with their capabilities and limitations. Benchmarking them is vital for understanding, but also poses a fundamental challenge given the black box nature of many models and their rapid capability improvements. Here, we want to raise the question to researchers with rich experience in this area and hear about their views.

We will be having Fazl Barez, Emanuele La Malfa, and other panelists to be confirmed later this week. We look forward to interacting with you and hope the discussion will be both interesting and insightful.

 

1 March 2024

Dr Guy Emerson, University of Cambridge

Truth conditions at scale, and beyond

Abstract: Truth-conditional semantics has been successful in explaining how the meaning of a sentence can be decomposed into the meanings of its parts, and how this allows people to understand new sentences. In this talk, I will first show how a truth-conditional model can be learnt in practice on large-scale datasets of various kinds (textual, visual, ontological), and how this provides empirical benefits compared to non-truth-conditional (vector-based) computational semantic models.

I will then take a step back to discuss the computational challenges of such an approach. I will argue it is (unfortunately) computationally intractable to reduce all kinds of language understanding to truth conditions, and so the truth-conditional account must be incomplete. To enable a more complete account, I will present a radically new approach to approximate Bayesian modelling, which rejects the computationally intractable ideal of using a single probabilistically coherent model. Instead, multiple component models are trained to mutually approximate each other. In particular, this provides a rigorous method for complementing a truth-conditional model with models for other kinds of language understanding. This new approach gives us a new set of tools for explaining how patterns of language use arise as a result of resource-bounded minds interacting with a computationally demanding world.

Links:

https://aclanthology.org/2023.emnlp-main.542/ 

https://aclanthology.org/2022.acl-long.275/

https://aclanthology.org/2023.starsem-1.37/

23 February 2024

Lionel Wong (PhD Student, MIT Brain and Cognitive Sciences)

From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought

https://arxiv.org/abs/2306.12672

Abstract: We think about the language that we hear. We can spend minutes or hours figuring out how to answer a question. We make plans on the basis of instructions or commands (or innocent queries about our long term goals), which might encompass dozens of fine-grained motor actions or abstract plans stretching years ahead. We imagine events in the past, the future, and wholly fictional universes on the basis of words. How do we derive meaning from language -- and how does the meaning that we get out of language go on to inform so much of our downstream thought, from planning and mental simulation to physical and downstream reasoning?

In this talk, we introduce rational meaning construction, a computational framework that models language-informed thinking by integrating learned, distributional language models with structured probabilistic languages for world modeling and inference. We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought (PLoT)--a general-purpose symbolic substrate for generative world modeling. In this talk, I’ll discuss how this framework can be concretely implemented to model how language can update our beliefs and drive downstream inferences; relate language to other core domains of cognition; and even model how we might construct new concepts and theories from what we hear in language.

 

16 February 2024

Daniella Ye, Hunar Batra

Highlights of NeurIPS 2023

Papers:

  1. Linguistic Binding in Diffusion Models: Enhancing Attribute Correspondence through Attention Map Alignment [https://arxiv.org/abs/2306.08877]
  2. An Emulator for Fine-Tuning Large Language Models using Small Language Models [https://arxiv.org/abs/2310.12962]
  3. Fine-Grained Human Feedback Gives Better Rewards for Language Model Training [https://openreview.net/forum?id=CSbGXyCswu&ref=ruder.io]
  4. Direct Preference Optimization: Your Language Model is Secretly a Reward Model [https://arxiv.org/abs/2305.18290]
  5. Language Models Don’t Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting [https://arxiv.org/abs/2305.04388]
  6. Are Emergent Abilities of Large Language Models a Mirage? [https://arxiv.org/abs/2304.15004]

Hunar Batra: Hunar is a DPhil Computer Science student at the University of Oxford working on Large Vision Language Models. Her research explores aligning these models towards improved transparency of Chain-of-Thought reasoning and Visual Reasoning by augmenting reasoning architectures that interlink the models' true reasoning processes mechanistically and behaviourally to give better insights into their actions and interpreting the world model learnt by them. She is a Visiting Researcher at NYU Alignment Research group and Research Consultant at Anthropic working on mitigating biased and ignored reasoning in LLMs. In the past, she worked on aligning language models and exploring multiversal generational dynamics of LLMs as a Stanford Existential Risks Initiatives ML Scholar, and trained language models to predict SARS-CoV-2 mutations for her MSc CS dissertation at Oxford.


Daniella Ye: Daniella Ye (Zihuiwen Ye) is a DPhil Computer Science student at the University of Oxford. Her research focuses on exploring alternative ways for language generation such as employing diffusion models to improve long range text cohesion via non-autoregressive planning. In the past, she worked on improving text-to-code generation with self-play. She is currently doing an internship at Cohere. 

9 February 2024

Fangru Lin, Emanuele La Malfa

Paper 1: Graph-enhanced Large Language Models in Asynchronous Plan Reasoning

http://arxiv.org/abs/2402.02805

Speaker: Fangru Lin

Abstract: Reasoning about asynchronous plans is challenging since it requires sequential and parallel planning to optimize time costs. Can large language models (LLMs) succeed at this task? Here, we present the first large-scale study investigating this question. We find that a representative set of closed and open-source LLMs, including GPT-4 and LLaMA-2, behave poorly when not supplied with illustrations about the task-solving process in our benchmark AsyncHow. We propose a novel technique called Plan Like a Graph (PLaG) that combines graphs with natural language prompts and achieves state-of-the-art results. We show that although PLaG can boost model performance, LLMs still suffer from drastic degradation when task complexity increases, highlighting the limits of utilizing LLMs for simulating digital devices. We see our study as an exciting step towards using LLMs as efficient autonomous agents.

Bio: Fangru is a DPhil Linguistics student at the University of Oxford. Her research focuses on using linguistically informed methods to assess and improve state-of-the-art LLMs’ performance in real-world tasks such as planning and using LLMs as autonomous agents. She is especially interested in the application of neuro-symbolic methods to general artificial intelligence. Industrially, she previously interned as a software engineer at Microsoft. For more information, please see Fangru's page.

Paper 2: Code Simulation Challenges for Large Language Models

https://arxiv.org/abs/2401.09074

Speaker: Emanuele La Malfa

Abstract: We investigate the extent to which Large Language Models (LLMs) can simulate the execution of computer code and algorithms. We begin by looking at straight line programs, and show that current LLMs demonstrate poor performance even with such simple programs -- performance rapidly degrades with the length of code.
We then investigate the ability of LLMs to simulate programs that contain critical paths and redundant instructions. We also go beyond straight line program simulation with sorting algorithms and nested loops, and we show the computational complexity of a routine directly affects the ability of an LLM to simulate its execution.
We observe that LLMs execute instructions sequentially and with a low error margin only for short programs or standard procedures.
LLMs' code simulation is in tension with their pattern recognition and memorisation capabilities: on tasks where memorisation is detrimental, we propose a novel prompting method to simulate code execution line by line. Empirically, our new Chain of Simulation (CoSm) method improves on the standard Chain of Thought prompting approach by avoiding the pitfalls of memorisation.

Bio: Emanuele is a postdoctoral researcher at the University of Oxford: he currently works on benchmarking Large Language Models funded by the Alan Turing Institute. He completed his PhD at Oxford on robustness as a property for Natural Language Processing.

 

2 February 2024

Isabelle Lorge, Myeongjun Jang

Title: Highlights of EMNLP 2023

Isabelle and Myeongjun both attended EMNLP 2023 (6-10 December in Singapore.) as the first authors papers. On 2 Feb, they will give flash presentations of some other papers at the meeting that they thought were especially significant and interesting for the OxNLP group. The papers are:

  •      Fast and Accurate Factual Inconsistency Detection Over Long Documents
  •      Let’s Sample Step by Step: Adaptive-Consistency for Efficient Reasoning and Coding with LLMs
  •      Conceptual structure coheres in human cognition but not in large language models
  •      When the Majority is Wrong: Modeling Annotator Disagreement for Subjective Tasks
  •      MemeCap: A Dataset for Captioning and Interpreting Memes
  •      Dissecting Recall of Factual Associations in Auto-Regressive Language Models
  •      Towards Interpretable Mental Health Analysis with Large Language Models
  •      CLIMB – Curriculum Learning for Infant-inspired Model Building
  •      Emergent Linear Representations in World Models of Self-Supervised Sequence Models
  •      Rigorously Assessing Natural Language Explanations of Neurons
  •      Assessing Step-by-Step Reasoning against Lexical Negation: A Case Study on Syllogism

26 January 2024

Paul Röttger
Postdoctoral Researcher
MilaNLP Lab at Bocconi University

Title: Helpful or Harmful? Evaluating the Safety of Large Language Models

Without proper safeguards, large language models will readily follow malicious instructions and generate toxic content. This risk motivates safety efforts such as red-teaming and large-scale feedback learning, which aim to make models both helpful and harmless. However, there is a tension between these two objectives, since harmlessness sometimes requires models to refuse to comply with unsafe prompts, and thus not be helpful. In my talk, I will engage with this tension in two ways. First, I will present our recent work on XSTest, a test suite for identifying exaggerated safety behaviours in large language models, where models refuse to comply with safe prompts if they superficially resemble unsafe prompts. Second, I will speak on general challenges in evaluating (the safety of) large language models, summarising recent negative perspectives, but also providing a more positive outlook.

19 January 2024

Hannah Kirk
DPhil student
University of Oxford

Title: Who decides how LLMs behave? Re-thinking the role of pluralistic human preferences in steering LLMs

Human feedback is a powerful tool to steer large language models (LLMs) to write and act like “us". Learning from human feedback has been the impetus for dramatic changes in the volume, variety and customisability of human-LLM interactions in the past year. This talk digs deeper into which humans are in the driving seat of LLM behaviour, and examines the current limitations in human feedback collection, particularly in terms of representation and conceptual clarity. While we mostly all agree that “alignment” is needed, no one can quite agree upon what it means in practice. To understand how different humans interpret what it means for an LLM to be aligned, I introduce the GRIFFIN Alignment Project: a large-scale dataset aimed at inclusive LLM alignment through granular, representative, and individualised feedback.  I conclude by addressing the challenges of greater personalisation, fragmentation and customisation in the LLMs of today and the agentic AI of tomorrow, advocating for robust governance and dynamic communication strategies.

1 December 2023

Anej Svete
PhD Student, ETH Zurich

Title: Understanding Language Models with Formal Language Theory: Recurrent Neural Language Models as Recognizers of (Probabilistic) Formal Languages

Language models (LMs) are currently at the forefront of NLP research due to their remarkable versatility across diverse tasks. Technologists have begun to speculate about such capabilities; among these speculations are claims that large LMs are general-purpose reasoners or even that they could be a basis for general AI. However, a large gap exists between their observed capabilities and our theoretical understanding of those. Luckily, in the context of computer science, the notion of reasoning is concretely defined—it refers to the ability to execute algorithms. Hence, if we study LMs in terms of well-understood formalisms characterizing the complexity of algorithms that can be solved, we can more precisely describe LMs’ abilities and limitations. 

With this motivation in mind, a large field of work has investigated the representational capacity of recurrent neural network (RNN) LMs in terms of their capacity to recognize formal languages—a well-established notion of the algorithmic capability of a formal model. In this talk, I will outline some classical results describing the representational capacity of RNNs. I will then connect this to the concrete task of language modeling, since LMs do not only describe unweighted formal languages. Rather, they define probability distributions over strings. We, therefore, pay special attention to the recognition of weighted formal languages and discuss what classes of probability distributions RNN LMs can represent. I will describe how simple RNN LMs with the Heaviside activation are equivalent to deterministic probabilistic finite-state automata, while ReLU-activated simple RNN LMs can model non-deterministic probabilistic finite-state automata and can, given unbounded computation time, even express any computable probability distribution.

 

24 November 2023

Scott Hale

Title: Whose LLM? Representation, bias, and applications to misinformation

Large-language models (LLMs) and generative AI could revolutionize computational social science, but their use also raises fundamental questions of representation and bias. Downstream users of LLMs need stronger evaluations in order to understand the languages, domains, and tasks within an LLM’s training and those which fall outside. This talk will present an overview of multiple studies. The first applies LLMs to identify misinformation narratives. The second demonstrates how current approaches to Reinforcement Learning from Human Feedback (RLHF) fail to address harmful, stereotypical outputs in non-Western contexts. Finally, the talk will present an in-progress experiment that aims to better understand what different people want from LLMs and how they perceive generative AI output.

 

17 November 2023

Harsha Nori
Microsoft

Title: Guidance — constrained sampling with formal grammars for controlling language models

Getting Language Models (LMs) to behave exactly the way we want them to can be challenging. Guidance is a popular open source library (14k+ stars) that combines the best of natural language prompting and code to help users structure prompts and express constraints. Guidance interfaces with many popular LM providers — both local, like HuggingFace and llama.cpp, and remote, like OpenAI — and provides a rich developer experience for constraining the output of LMs.

This talk will be a sneak preview of a major update to the guidance library. We’ll go over the basics of the guidance language, and — on the research side — discuss how guidance efficiently translates user specified constraints into formal grammars, which then make low level alterations to an LM sampling process. We’ll also discuss token healing — how subtle but important issues arise at prompt boundaries when translating from text to token space, and how guidance automatically heals these issues for users. We’ll end with a forward looking discussion on the future of constrained sampling.

 

10 November 2023

Ved Mathai
DPhil student, Oxford e-Research Centre, University of Oxford

Title: Sparks of Artificial General Intelligence: Early experiments with GPT-4

(https://arxiv.org/pdf/2303.12712.pdf)

Abstract: While GPT-4 has awed both academia and the world beyond with its seemingly human-like capabilities, this report by Microsoft makes early steps towards trying to grasp its true capabilities. The operative word here is 'sparks'. While GPT-4 may seem intelligent, its intelligence only appears in fleeting sparks. Existing benchmarks are not designed to measure creativity and there is always a chance that GPT-4 had ingested existing benchmarks to become good at them. The authors try to qualitatively judge its power with semi-systematic qualitative judgements, while asking in the abstract if we need a better definition of 'general intelligence' anyway. Since many in the reading group are interested in benchmarking the power of GPT-4, this report promises to provide the different aspects of 'intelligence' we should be testing when designing benchmarks for the different new-age large-language models.

 

3 November 2023

Marek Rei
Senior Lecturer of Machine Learning, Imperial College, London

 Marek Rai


Title: Interpretable architectures and guided attention for neural language models

Neural models of natural language have achieved remarkable results, but their interpretability remains an open issue. While they are able to output accurate predictions, it is unclear how they reached their decision and whether it was for the right reasons.

In this work, we investigate neural architectures for representing language that are inherently interpretable - they are able to point to relevant evidence in the input by themselves. This is achieved by careful use of attention, having the model dynamically make decisions about which input areas are important to consider. Furthermore, we can directly supervise this attention, teaching the model to make decisions based on the same evidence as humans.

        


27 October 2023

Aleksandar Petrov
DPhil student, Autonomous Intelligent Machines and Systems CDT, University of Oxford

Title: When do Prompting and Prefix-Tuning Work? A Theory of Capabilities and Limitations

Context-based fine-tuning methods like prompting, in-context learning, soft
prompting (prompt tuning) and prefix-tuning have gained popularity as they of-
ten match the performance of full fine-tuning with a fraction of the parameters.
Despite their empirical successes, there is little theoretical understanding of how
these techniques influence the internal computation of the model and their expres-
siveness limitations.

We show that despite the continuous embedding space being
much more expressive than the discrete token space, soft-prompting and prefix-
tuning are strictly less expressive than full fine-tuning. Concretely, context-based
fine-tuning cannot change the relative attention pattern over the content and can
only bias the outputs of an attention layer in a fixed direction. While this means that fine-tuning techniques such as prompting, in-context learning, soft prompting and prefix-tuning can successfully elicit or combine skills already present in the pretrained model, they cannot learn tasks requiring new attention patterns.

 

20 October 2023

Janet Pierrehumbert
Professor of Language Modelling, Deptartment of Engineering Science, University Oxford

Title: Mismatches between human language processing and NLP

NLP algorithms have achieved high performance on many tasks. However, this is often at the expense of exorbitant training. Even with such training, they fall short on certain tasks that humans perform easily and reliably. This talk will give an overview of some well-established core properties of human language processing. It will summarize the extent to which SOTA transformer models and generative models incorporate, or fail to incorporate, these properties.

 

16 June 2023

Jingwei Ni. from ETH & UZH

Title: When Does Aggregating Multiple Skills with Multi-Task Learning Work? A Case Study in Financial NLP 

Abstract: Multi-task learning (MTL) aims at achieving a better model by leveraging data and knowledge from multiple tasks. However, MTL does not always work – sometimes negative transfer occurs between tasks, especially when aggregating loosely related skills, leaving it an open question when MTL works. Previous studies show that MTL performance can be improved by algorithmic tricks. However, what tasks and skills should be included is less well explored. In this work, we conduct a case study in Financial NLP where multiple datasets exist for skills relevant to the domain, such as numeric reasoning and sentiment analysis. Due to the task difficulty and data scarcity in the Financial NLP domain, we explore when aggregating such diverse skills from multiple datasets with MTL can work. Our findings suggest that the key to MTL success lies in skill diversity, relatedness between tasks, and choice of aggregation size and shared capacity. Specifically, MTL works well when tasks are diverse but related, and when the size of the task aggregation and the shared capacity of the model are balanced to avoid overwhelming certain tasks.

 

26 May 2023

Pushpak Bhattacharyya
Bhagat Singh Rekhi Chair Professor of Computer Science and Engineering at IIT Bombay

Title: Natural Language Processing and Mental Health

Abstract: As per WHO, the number of mental health patients all over the world is about 1000 million, and about 14% of deaths in the world are due to mental disorders. The ratio of doctors to patients in the case of mental health support is about 1:10000. This situation underlines the need for automation in mental health monitoring. In this talk, we present our ongoing work on using natural language processing (NLP) and machine learning (ML) for detecting mental conditions and also generate positive and reassuring statements that can pull a person from the brink of taking extreme steps. The data sets, classification, and text generation framework will be described pointing to rich possibilities of future work.

Bio: Prof Pushpak Bhattacharyya is Bhagat Singh Rekhi Chair Professor of Computer Science and Engineering at IIT Bombay. He has done extensive research in Natural Language Processing and Machine Learning. Some of his noteworthy contributions are Indian Language NLP like IndoWordnet, Cognitive NLP, Low Resource MT, and Knowledge Graph-Deep Learning Synergy in Information Extraction and Question Answering. He served as President of the ACL (Association of Computational Linguistics) in 2016.

 

19 May 2023

William Wang
Associate Professor of Computer Science
University of California, Santa Barbara

Title: On bias, trustworthiness, and safety of language models

Bio: William Wang is the Co-Director of UC Santa Barbara's Natural Language Processing group and Center for Responsible Machine Learning. He is the Duncan and Suzanne Mellichamp Chair in Artificial Intelligence and Designs, and an Associate Professor in the Department of Computer Science at the University of California, Santa Barbara. He has published more than 100 papers at leading NLP/AI/ML conferences and journals, and received best paper awards (or nominations) at ASRU 2013, CIKM 2013, EMNLP 2015, and CVPR 2019, a DARPA Young Faculty Award (Class of 2018), an IEEE AI's 10 to Watch Award (Class of 2020), and many more. He frequently serves as an Area Chair or Senior Area Chair for NAACL, ACL, EMNLP, and AAAI. He is an elected member of IEEE Speech and Language Processing Technical Committee (2021-2023) and a member of ACM Future of Computing Academy.

 

12 May 2023

Edoardo Ponti
Lecturer in Natural Language Processing
University of Edinburgh

Title: Modular Deep Learning

Abstract: "Transfer learning has recently become the dominant paradigm of machine learning. Pre-trained models fine-tuned for downstream tasks achieve better performance with fewer labelled examples. Nonetheless, it remains unclear how to develop models that specialise towards multiple tasks without incurring negative interference and that generalise systematically to non-identically distributed tasks. Modular deep learning has emerged as a promising solution to these challenges. In this framework, units of computation are often implemented as autonomous parameter-efficient modules. Information is conditionally routed to a subset of modules and subsequently aggregated.

These properties enable positive transfer and systematic generalisation by separating computation from routing and updating modules locally. In this talk, I will introduce a general framework for modular neural architectures, providing a unified view over several threads of research that evolved independently in the scientific literature. In addition, I will provide concrete examples of their applications: 1) cross-lingual transfer by recombining task-specific and language-specific task sub-networks; 2) knowledge-grounded text generation by Fisher-weighted addition of modules promoting positive behaviours (e.g. abstractiveness) or negation of modules promoting negative behaviours (e.g. hallucinations); 3) generalisation to new NLP and RL tasks by jointly learning to route information to a subset of modules and to specialise them towards specific skills (common sub-problems reoccurring across different tasks).

Banner artwork credit: Shirley Steele https://www.steelestudio.com