
ShipMRR Docs.

Welcome to the comprehensive, developer-first guide to deploying the ShipMRR boilerplate. Stop wrestling with setup and start building autonomous intelligence.

1. Introduction

ShipMRR is not just another SaaS boilerplate. It is tailored specifically for AI-Native B2B Applications.

While traditional boilerplates give you basic login and payments, ShipMRR is designed to handle the heavy lifting of modern AI applications:

  • Long-running AI tasks without serverless timeouts.
  • Storing and retrieving AI memory (vector database/RAG) with strict Row Level Security boundaries.
  • Multi-tenant architecture (organizations and team members) built in, ready for enterprise customers out of the box.

2. Tech Stack Overview

The definitive architecture running under the hood:

Next.js 16

The core React framework powering both frontend and backend via the App Router.

Tailwind CSS v4

Utility-first styling engine, themed by the 'Bold & Raw' preset.

Supabase

PostgreSQL Database, Auth, and Vector Storage (pgvector) for AI.

Stripe

Handles B2B subscriptions via Metered Billing for token usage.

Vercel AI SDK

Streams responses securely from OpenAI to the UI layer.

Inngest

Executes background jobs for long-running AI tasks safely.

3. Key Features Explained

🔑 Authentication (Supabase Auth)

We use Supabase for authentication. It supports powerful methods like Email/Password, Magic Links, and Social Logins. It is deeply integrated into Postgres Row Level Security (RLS) to enforce data boundaries.
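Conceptually, an RLS policy such as `USING (org_id = auth.jwt() ->> 'org_id')` behaves like a WHERE clause Postgres appends to every query, so application code never has to remember the tenant filter. A minimal in-app analogue of that predicate (table and column names here are hypothetical, not the boilerplate's actual schema):

```typescript
// Conceptual illustration only: RLS filters rows inside Postgres itself;
// this sketch just shows the predicate it applies on every read.
interface Row { id: number; org_id: string; body: string }
interface Session { userOrgId: string }

function applyRowLevelSecurity(rows: Row[], session: Session): Row[] {
  // Equivalent of: USING (org_id = auth.jwt() ->> 'org_id')
  return rows.filter((row) => row.org_id === session.userOrgId);
}

const rows: Row[] = [
  { id: 1, org_id: "acme", body: "visible to acme" },
  { id: 2, org_id: "globex", body: "hidden from acme" },
];
const visible = applyRowLevelSecurity(rows, { userOrgId: "acme" });
console.log(visible.map((r) => r.id)); // [ 1 ]
```

The advantage of enforcing this in the database rather than in application code is that a forgotten filter in one query cannot leak another tenant's data.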

🧠 Database & Vector Storage (pgvector)

Traditional databases store text. AI needs numeric "understandings" of data called vector embeddings. Supabase runs the pgvector extension natively, allowing you to build "Chat with your PDF" or semantic search directly inside your database schema.
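Under the hood, pgvector ranks rows by vector distance (e.g. its cosine-distance operator `<=>`). A plain-TypeScript sketch of the same idea, using tiny hypothetical vectors rather than real model embeddings:

```typescript
// Cosine similarity: 1.0 means identical direction, 0 means unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored chunks against a query embedding, most similar first.
function semanticSearch(
  query: number[],
  docs: { text: string; embedding: number[] }[],
  k: number
): string[] {
  return [...docs]
    .sort((x, y) =>
      cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding))
    .slice(0, k)
    .map((d) => d.text);
}

const docs = [
  { text: "refund policy", embedding: [0.9, 0.1] },
  { text: "api rate limits", embedding: [0.1, 0.9] },
];
console.log(semanticSearch([0.85, 0.2], docs, 1)); // [ 'refund policy' ]
```

In production the database computes this over an index, so you never pull every embedding into application memory.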

⚡ Background Jobs (Inngest)

Standard serverless functions time out after 10-30 seconds. ShipMRR integrates Inngest to shunt heavy AI tasks to background workers, so your application stays responsive while inference runs.
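The fire-and-forget pattern this enables can be sketched with an in-memory queue. All names below are hypothetical, real inference is asynchronous, and the boilerplate sends events through the Inngest SDK rather than this toy queue:

```typescript
// Sketch of the pattern: the request handler records a job and returns
// immediately; a worker outside the request/response cycle does the slow work,
// so serverless timeouts never apply to the inference itself.
type Job = { id: number; prompt: string };
const queue: Job[] = [];
const results = new Map<number, string>();

// Called from the API route: O(1), never blocks on inference.
function enqueue(job: Job): void {
  queue.push(job);
}

// Runs on a background worker; in reality runInference would be async.
function worker(runInference: (prompt: string) => string): void {
  while (queue.length > 0) {
    const job = queue.shift()!;
    results.set(job.id, runInference(job.prompt));
  }
}

enqueue({ id: 1, prompt: "summarize this PDF" });
worker((p) => `summary of: ${p}`); // stand-in for a slow model call
console.log(results.get(1)); // summary of: summarize this PDF
```

The key property is that the HTTP handler's only job is the `enqueue` call; everything slow happens where no platform timeout is watching.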

💳 AI-Native Billing (Stripe)

Pre-configured for Stripe metered + tiered billing. Charge users precisely for the AI compute tokens they consume, or charge a flat monthly retainer.
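The arithmetic behind tiered metered billing can be sketched as below. The tier boundaries and prices are illustrative, not ShipMRR's actual pricing, and in production Stripe aggregates the reported usage itself:

```typescript
// Graduated tiers: each band of usage is billed at that band's rate.
interface Tier { upTo: number | null; centsPer1kTokens: number }

const tiers: Tier[] = [
  { upTo: 100_000, centsPer1kTokens: 0 },   // free allowance
  { upTo: 1_000_000, centsPer1kTokens: 2 }, // standard rate
  { upTo: null, centsPer1kTokens: 1 },      // volume discount
];

function monthlyChargeCents(tokensUsed: number): number {
  let remaining = tokensUsed;
  let previousCap = 0;
  let cents = 0;
  for (const tier of tiers) {
    const cap = tier.upTo ?? Infinity;
    const inTier = Math.min(remaining, cap - previousCap);
    cents += (inTier / 1000) * tier.centsPer1kTokens;
    remaining -= inTier;
    previousCap = cap;
    if (remaining <= 0) break;
  }
  return Math.round(cents);
}

console.log(monthlyChargeCents(250_000)); // 300 (150k billable tokens * 2¢/1k)
```

Keeping amounts in integer cents avoids floating-point drift when invoices are summed.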

4. Project Structure

ShipMRR/
├── src/                  # Primary application logic
│   ├── app/              # Next.js App Router (pages & API)
│   │   ├── admin/        # Internal admin oversight
│   │   ├── api/          # Route handlers (webhooks, AI chat)
│   │   ├── dashboard/    # Protected user environment
│   │   └── login/        # Auth entrypoints
│   ├── components/       # Reusable UI & logical blocks
│   └── lib/              # Shared utilities (supabase, stripe)
├── supabase/             # Database configurations
│   └── migrations/       # SQL schema definitions (RLS / vectors)
├── public/               # Static assets
└── package.json          # Dependency manifest

5. Prerequisites & Initialization

Supabase Database Setup

  1. Go to Supabase and create a new project. Copy your `URL` and `anon` key to `.env.local`.
  2. Go to the SQL Editor tab on the left sidebar.
  3. Open the file located at supabase/migrations/0001_waitlist_schema.sql in this codebase.
  4. Copy all the text in that SQL file, paste it into the Supabase SQL Editor and hit Run.

Stripe Billing Setup

  1. Create a Stripe account and enable "Test Mode".
  2. Copy your Publishable and Secret keys to `.env.local`.
  3. Go to Developers -> Webhooks and add an endpoint.
  4. If developing locally, run: stripe listen --forward-to localhost:3000/api/webhooks/stripe
  5. Copy the webhook signing secret (starts with whsec_) to STRIPE_WEBHOOK_SECRET in `.env.local`.
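That `whsec_` secret is what lets your endpoint prove a webhook really came from Stripe. In the boilerplate the Stripe SDK's `constructEvent` performs this check for you; a stdlib-only sketch of what it verifies (the secret and payload below are made up):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Stripe signs `${timestamp}.${rawBody}` with your whsec_ secret (HMAC-SHA256)
// and sends the result in the Stripe-Signature header as `t=...,v1=...`.
function verifyStripeSignature(rawBody: string, header: string, secret: string): boolean {
  const parts = new Map(header.split(",").map((p) => p.split("=") as [string, string]));
  const timestamp = parts.get("t");
  const signature = parts.get("v1");
  if (!timestamp || !signature) return false;
  const expected = createHmac("sha256", secret)
    .update(`${timestamp}.${rawBody}`)
    .digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // Constant-time comparison to avoid leaking the digest byte by byte.
  return a.length === b.length && timingSafeEqual(a, b);
}

// Demo: self-sign a hypothetical payload, then verify it.
const secret = "whsec_test";
const body = '{"type":"invoice.paid"}';
const t = "1700000000";
const v1 = createHmac("sha256", secret).update(`${t}.${body}`).digest("hex");
console.log(verifyStripeSignature(body, `t=${t},v1=${v1}`, secret)); // true
```

Always verify against the raw request body, not re-serialized JSON: any whitespace difference changes the digest and the check will fail.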

Third-Party Triggers & AI

  1. Create an OpenAI account. Copy an API key to OPENAI_API_KEY.
  2. Create a Resend account. Add your verified domain or use their test network. Copy the key to RESEND_API_KEY.
  3. Create an Inngest account. Copy your Event Key to INNGEST_EVENT_KEY to enable long-running background AI jobs.

Local Boot

  1. Install dependencies: npm install
  2. Run server: npm run dev
  3. Open localhost:3000 and log in to get past the auth middleware.

6. Environment Map

| Variable | Purpose |
| --- | --- |
| NEXT_PUBLIC_SUPABASE_URL | Target URL for database operations. |
| NEXT_PUBLIC_SUPABASE_ANON_KEY | Public key, constrained by RLS security policies. |
| SUPABASE_SERVICE_ROLE_KEY | Admin bypass key. Keep strictly secret. |
| STRIPE_SECRET_KEY | Executes charges and controls metered usage syncing. |
| STRIPE_WEBHOOK_SECRET | Verifies incoming Stripe webhook signatures. |
| OPENAI_API_KEY | Authorizes vector embeddings and text generation. |
| RESEND_API_KEY | Authorizes transactional email transmission. |
| INNGEST_EVENT_KEY | Enables long-running background AI jobs. |
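A missing variable is easiest to catch at boot rather than on the first failing request. A hedged sketch of such a check (this helper is not part of the boilerplate; in a real app you would pass `process.env`):

```typescript
// Fail fast if any required variable is absent or empty.
const REQUIRED_ENV = [
  "NEXT_PUBLIC_SUPABASE_URL",
  "NEXT_PUBLIC_SUPABASE_ANON_KEY",
  "SUPABASE_SERVICE_ROLE_KEY",
  "STRIPE_SECRET_KEY",
  "OPENAI_API_KEY",
  "RESEND_API_KEY",
] as const;

function missingEnv(env: Record<string, string | undefined>): string[] {
  return REQUIRED_ENV.filter((name) => !env[name]);
}

// Demo with a deliberately incomplete, hypothetical environment.
console.log(missingEnv({ NEXT_PUBLIC_SUPABASE_URL: "https://example.supabase.co" }));
```

Calling `missingEnv(process.env)` once at startup and throwing on a non-empty result turns a cryptic runtime failure into a one-line error message.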

7. Deployment (Vercel)

Immediate production rollouts using Vercel architecture:

  1. Commit all changes and push to a GitHub repository.
  2. Access your Vercel Dashboard and initialize a New Project.
  3. Import the target repository.
  4. Copy every value from your .env.local into the Vercel Environment Variables configuration panel.
  5. Execute Deploy. Vercel will process the Next.js build command and output a production URL.

8. F.A.Q.

Do I need to be an AI architect to use this?

Negative. We utilize the Vercel AI SDK. Transmitting a prompt to OpenAI is handled entirely via React hooks using simple string values.

Why do generative UI chunks occasionally time out?

If you bypass the Inngest background workers for heavy operations, serverless functions hit the platform timeout (30s on the hobby tier). Always route data-heavy RAG operations through background execution.

How do I modify the 'Bold & Raw' aesthetics?

Target src/app/globals.css. Adjust the raw HEX values mapped to the CSS custom properties. The entire application picks up those changes immediately.
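The kind of override involved looks like the fragment below. The variable names are illustrative, not necessarily the tokens the preset actually defines; check globals.css for the real ones:

```css
/* src/app/globals.css — hypothetical token names for illustration. */
:root {
  --background: #0a0a0a;
  --foreground: #fafafa;
  --accent: #ff3e00; /* swap this HEX value to retheme every accent at once */
}
```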

Where does user data exist physically?

Safely within your designated Supabase PostgreSQL instance. You maintain absolute control over the schema and user auth tables.

System EOF

Happy Shipping. Welcome to the AI-Native iteration of SaaS.