Documentation Index

Fetch the complete documentation index at: https://docs.trygravity.ai/llms.txt

Use this file to discover all available pages before exploring further.

The request flow

  1. The user sends a message. Your client fires the LLM request and calls gravityContext() to capture device + session info.
  2. Your server receives both and fires gravity.getAds() in parallel with the LLM call. The ad request never blocks streaming.
  3. Gravity’s engine matches the conversation against active campaigns, runs an auction, and returns the winning ad.
  4. Your stream appends the ad as a final chunk. Your client shows it.

Everything happens in a few hundred milliseconds end-to-end. Since it runs in parallel with your LLM call, user-perceived latency doesn’t move.
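The parallel flow above can be sketched roughly like this. Note that `streamLLM` and `getAds` here are self-contained stand-ins for your LLM client and Gravity’s SDK, and the chunk shape is an assumption for illustration, not the real API:

```typescript
// Illustrative sketch of the parallel request flow.
// streamLLM and getAds are stand-ins, NOT Gravity's real SDK.

type Chunk = { type: "text" | "ad"; content: string };

// Stand-in for your streaming LLM call.
async function streamLLM(userMessage: string): Promise<Chunk[]> {
  return [{ type: "text", content: `Answer to: ${userMessage}` }];
}

// Stand-in for gravity.getAds(): resolves with the winning ad, or null.
async function getAds(conversation: string[]): Promise<string | null> {
  return conversation.length > 0 ? "Sponsored: Managed Postgres" : null;
}

async function handleMessage(userMessage: string): Promise<Chunk[]> {
  // Fire both in parallel: the ad request never blocks the LLM stream.
  const [llmChunks, ad] = await Promise.all([
    streamLLM(userMessage),
    getAds([userMessage]),
  ]);
  // Append the ad as a final chunk if one was returned.
  return ad ? [...llmChunks, { type: "ad", content: ad }] : llmChunks;
}
```

Because `Promise.all` starts both requests before awaiting either, the total wait is the slower of the two, which in practice is the LLM call.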

Contextual matching

When a user asks “how do I deploy a serverless Postgres database?”, the engine looks at the recent conversation and returns an ad whose campaign targets that topic — say, a managed Postgres provider. The ad copy is generated on the fly to fit the conversation, so the user sees something relevant and specific rather than a generic banner. There’s no persistent user profile driving this match. It lives and dies with the conversation.
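As a toy illustration of the idea (not Gravity’s actual matching or auction logic, and with invented campaign fields), contextual matching can be thought of as scoring active campaigns against the recent conversation, then auctioning among the matches:

```typescript
// Toy keyword-based contextual matcher — an illustration only,
// not Gravity's real engine. All fields here are hypothetical.

interface Campaign {
  advertiser: string;
  keywords: string[];
  bid: number; // assumed CPM bid in USD
}

// Score = number of campaign keywords present in the conversation.
function score(campaign: Campaign, conversation: string): number {
  const text = conversation.toLowerCase();
  return campaign.keywords.filter((k) => text.includes(k)).length;
}

// Keep campaigns that matched at all, then run a trivial
// highest-bid auction among them.
function pickAd(campaigns: Campaign[], conversation: string): Campaign | null {
  const matched = campaigns.filter((c) => score(c, conversation) > 0);
  if (matched.length === 0) return null;
  return matched.reduce((best, c) => (c.bid > best.bid ? c : best));
}
```

For the serverless-Postgres question above, a campaign keyed on `postgres`/`database` wins over an unrelated one regardless of bid, because only matching campaigns enter the auction. Note there is no user profile anywhere in this function’s inputs — only the conversation text.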

What gets paid out

CPM (cost per 1,000 impressions)

You get paid when the ad becomes visible to the user. CPM is the public billing model on Gravity today.

What’s next

AI platform quickstart

Build the integration end-to-end.

Core concepts

Glossary of the terms used throughout the docs.