TextGrad: Automatic "Differentiation" via Text

TextGrad is a Python package that provides a simple interface for building LLM "gradient" pipelines for text optimization!

Paper

Read our research paper for the methodology and experimental results.

Source code

Visit our GitHub repository to access all the source code.

API Documentation

Get an in-depth understanding of each function and feature.

Applications

Discover the wide range of practical applications and case studies!

About TextGrad

If you know PyTorch, you basically already know how to use TextGrad! It mirrors PyTorch's syntax and abstractions, as the sketch below shows.

TextGrad can accomplish many tasks, including:

  • Achieving the best known score on LeetCode Hard
  • Reaching state-of-the-art accuracy on GPQA
  • Designing new molecules
  • Improving radiation treatment plans

TextGrad flexibly optimizes any system of agents and tools. Many such systems involve black-box components that are hard to tune with standard numerical gradients but easy to optimize with textual gradients. The "gradients" here are natural-language feedback, which makes them easy to interpret!
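To give a flavor of the API, here is a minimal sketch of a TextGrad optimization loop, adapted from the quick-start example in the GitHub repository. The engine name "gpt-4o" and the prompts are illustrative; see the API documentation for the exact signatures.

```python
import textgrad as tg

# Use an LLM as the "backward engine" that produces textual gradients.
tg.set_backward_engine("gpt-4o", override=True)

# Get an initial answer from a black-box LLM.
model = tg.BlackboxLLM("gpt-4o")
question = tg.Variable(
    "If it takes 1 hour to dry 25 shirts under the sun, how long does it "
    "take to dry 30 shirts under the sun? Reason step by step.",
    role_description="question to the LLM",
    requires_grad=False,
)
answer = model(question)
answer.set_role_description("concise and accurate answer to the question")

# Define a natural-language loss and a Textual Gradient Descent optimizer,
# mirroring the loss/optimizer pattern of PyTorch.
loss_fn = tg.TextLoss(
    "Evaluate the given answer to the drying-time question. "
    "Be logical and very critical; provide concise feedback."
)
optimizer = tg.TGD(parameters=[answer])

# Loss computation, backward pass, and update: the same syntax as PyTorch,
# except the "gradients" flowing backward are natural-language critiques.
loss = loss_fn(answer)
loss.backward()
optimizer.step()
print(answer.value)
```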

Start using TextGrad

Check out this great intro video from code AI:

Successful Applications

Here are examples of successful implementations that demonstrate the effectiveness and versatility of TextGrad across various applications.

Coding

We optimize solutions to difficult coding problems from LeetCode Hard, boosting the performance of gpt-4o combined with the best existing method by a 20% relative performance gain.

Problem solving

We optimize solutions to complex scientific questions to improve the zero-shot performance of GPT-4o. For instance, on the Google-Proof Question Answering (GPQA) benchmark, we improve zero-shot accuracy from 51% to 55% by refining the solutions at test time.

Reasoning

We optimize prompts to improve LLM performance, pushing GPT-3.5 close to GPT-4 on several reasoning tasks.
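The same Variable/loss/optimizer pattern applies when the trainable parameter is a system prompt rather than a single answer. Below is a hedged sketch: the engine names, the evaluation instruction, and the one-question mini-batch are illustrative placeholders, not the exact setup used in the paper.

```python
import textgrad as tg

# A strong model critiques outputs and proposes updates (the backward
# engine), while a cheaper model runs the task. Engine names are illustrative.
tg.set_backward_engine("gpt-4o", override=True)

# The system prompt is the parameter we optimize.
system_prompt = tg.Variable(
    "You are a careful assistant. Think step by step and state the final "
    "answer on the last line.",
    role_description="system prompt for the reasoning model",
    requires_grad=True,
)
model = tg.BlackboxLLM("gpt-3.5-turbo", system_prompt=system_prompt)
optimizer = tg.TGD(parameters=[system_prompt])

loss_fn = tg.TextLoss(
    "Evaluate whether the answer below solves the question correctly and "
    "concisely. Point out any reasoning errors."
)

# Placeholder mini-batch of training questions.
questions = [
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than "
    "the ball. How much does the ball cost?",
]

for q in questions:
    question = tg.Variable(q, role_description="reasoning question",
                           requires_grad=False)
    answer = model(question)
    loss = loss_fn(answer)
    loss.backward()  # textual gradients accumulate on system_prompt

optimizer.step()  # rewrite the system prompt using the collected feedback
print(system_prompt.value)
```

In practice you would iterate over mini-batches of training questions and validate the updated prompt on held-out examples before accepting it.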

Chemistry

We design new small molecules with desirable druglikeness and in silico binding affinity to drug targets.

Medicine

We optimize radiation treatment plans for prostate cancer patients to achieve desirable target dosage and reduce side effects.

TextGrad Team

Mert Yuksekgonul

Stanford University

Federico Bianchi

Stanford University

Joseph Boen

Stanford University

Sheng Liu

Stanford University

Zhi Huang

Stanford University

Carlos Guestrin

Stanford University

James Zou

Stanford University