I spent the week tightening up a new on-device commentary flow inside swiftbible, powered entirely by Apple Intelligence. Here’s what worked, what didn’t, and the guardrails I ended up shipping. If you want the deeper dive straight from Cupertino, the best companions are WWDC25 Session 286 and the Foundation Models documentation.

Streaming explanation powered by Apple Intelligence

Why Apple Intelligence Felt Right

  • Private by default – the commentary streams stay on-device, so no round trips to a server when someone taps “Explain.”
  • Latency – streaming responses feel instant when the model is local, which matters when you’re trying to keep someone inside a devotional flow.
  • Guided output – the @Generable macro lets the model fill a typed VerseExplanationGeneration struct directly, so there's no giant prompt parser to maintain (see the sketch just below).
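
For reference, here's a rough sketch of what that guided-output shape can look like. The field names mirror the schema I describe under "Under the Hood Enhancements" below; the @Guide descriptions are illustrative placeholders, not the exact production wording.

import FoundationModels

// Sketch of the guided-output shape. Field names mirror the production schema;
// the guide descriptions are placeholders.
@Generable
struct VerseExplanationGeneration {
    @Guide(description: "A one- or two-sentence summary of the verse.")
    let summary: String

    @Guide(description: "Where the passage sits in its book and the wider canon.")
    let context: String

    @Guide(description: "Key theological themes the verse raises.")
    let theology: String

    @Guide(description: "A practical, pastoral application for today.")
    let application: String

    @Guide(description: "Literary observations: genre, imagery, structure.")
    let literary: String

    @Guide(description: "Historical and cultural background.")
    let historical: String
}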

Lessons Learned

1. Gate The Feature Before You Dream

Apple’s sample code (and my own mistakes) drilled this in: always check availability, then fall back. When the model isn’t ready I finish the async stream with a friendly error so the UI can show a toast instead of hanging forever.

func streamExplanation(for request: VerseExplanationRequest)
  -> AsyncThrowingStream<String, Error> {
    // OS availability gate: only route to the on-device model when the
    // Foundation Models framework actually exists on this system.
    if #available(iOS 26.0, macOS 26.0, macCatalyst 26.0, visionOS 26.0, *) {
        return AppleFoundationModelServiceImplementation.shared
            .streamExplanation(for: request)
    }

    // Fallback: finish immediately with a friendly error so the UI can show
    // a toast instead of hanging on an empty stream.
    return AsyncThrowingStream { continuation in
        continuation.finish(
            throwing: AppleFoundationModelServiceError.modelUnavailable(
                reason: "Requires iOS 26, macOS 26, macCatalyst 26, or visionOS 26."
            )
        )
    }
}
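
The #available check only proves the framework exists on the OS. Before opening a session I also ask the model itself whether it's ready. Here's a simplified sketch of that check; modelUnavailableMessage() is a stand-in, and the real code maps these cases onto the app's AvailabilityStatus enum mentioned later.

import FoundationModels

// Simplified sketch: map the on-device model's availability to a user-facing
// message. Returns nil when the model is ready to use.
@available(iOS 26.0, macOS 26.0, macCatalyst 26.0, visionOS 26.0, *)
func modelUnavailableMessage() -> String? {
    switch SystemLanguageModel.default.availability {
    case .available:
        return nil
    case .unavailable(.deviceNotEligible):
        return "This device doesn't support Apple Intelligence."
    case .unavailable(.appleIntelligenceNotEnabled):
        return "Turn on Apple Intelligence in Settings to get verse explanations."
    case .unavailable(.modelNotReady):
        return "The on-device model is still getting ready. Try again in a bit."
    case .unavailable:
        return "Apple Intelligence isn't available right now."
    }
}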

2. Tune Instructions and Generation Options Together

Short, role-focused instructions land better than a novel-length system prompt. I kept mine to one sentence about being a “pastoral Bible commentary assistant,” then tuned options for warmth without runaways:

self.generationOptions = GenerationOptions(
    sampling: nil,
    temperature: 0.7,
    maximumResponseTokens: 700
)

The 0.7 temperature gives enough color, while the token cap keeps the stream from meandering into a full sermon.
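
For context, here's roughly how those options and the one-sentence instructions meet. In the app the response is the structured Generable type described later, but a plain string stream keeps this sketch short; streamCommentary and the prompt parameter are stand-ins for the real call site.

import FoundationModels

// Minimal sketch: one-sentence instructions, the tuned options, and a streamed
// response. Production code streams the Generable structure instead of a String.
@available(iOS 26.0, macOS 26.0, macCatalyst 26.0, visionOS 26.0, *)
func streamCommentary(prompt: String) async throws {
    let session = LanguageModelSession(
        instructions: "You are a pastoral Bible commentary assistant."
    )
    let options = GenerationOptions(
        sampling: nil,
        temperature: 0.7,
        maximumResponseTokens: 700
    )

    for try await partial in session.streamResponse(to: prompt, options: options) {
        // Every snapshot carries the full partial text generated so far.
        print(partial)
    }
}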

3. Prompt Engineering Is Still The Job

I fought the “single paragraph” output plague until I gave the model real examples. The final prompt includes two full-length commentaries (John 3:16 and Psalm 23:4) with blank lines, history, theology, and application. That few-shot pattern finally convinced the model to produce beautiful, multi-paragraph responses every time.
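
The example text is far too long to inline here, but structurally the prompt looks something like this sketch. The placeholder constants stand in for the two full commentaries, and the production builder works from the app's VerseExplanationRequest rather than bare strings.

// Stand-ins for the two full-length example commentaries.
let johnExample = "…full multi-paragraph John 3:16 commentary…"
let psalmExample = "…full multi-paragraph Psalm 23:4 commentary…"

// Sketch of the few-shot prompt shape.
func buildExplanationPrompt(reference: String, verseText: String) -> String {
    """
    Here are two examples of the commentary style and formatting I expect.

    Example 1 (John 3:16):
    \(johnExample)

    Example 2 (Psalm 23:4):
    \(psalmExample)

    Now write a commentary for \(reference): "\(verseText)".
    Use several short paragraphs separated by blank lines, covering historical
    background, theology, and practical application.
    """
}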

4. Tell People What’s AI (And Where It Ran)

  • Verse explanations show a “sparkles” badge with a tap-to-reveal message (“on device with Apple Intelligence”).
  • Daily devotionals keep a different disclosure (“Generated by GPT-5 mini via OpenAI”) because they truly come from a Supabase Edge Function.
  • I hide the translation badge whenever someone is reading the Apocrypha or the Book of Enoch—no need to pretend the KJV owns those passages.

// Header row: the verse reference plus an optional translation badge.
HStack(spacing: 8) {
    Text(request.reference)
        .font(.headline)
    if request.shouldDisplayTranslationBadge {
        Text(request.translation.uppercased())
            .font(.caption)
            .fontWeight(.semibold)
            .padding(.horizontal, 10)
            .padding(.vertical, 4)
            .background(
                Capsule().fill(Color.secondary.opacity(0.12))
            )
    }
    Spacer(minLength: 0)
}
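
The disclosure badge itself is tiny. Here's a sketch of the tap-to-reveal piece; the real AIGeneratedBadge mentioned below is provenance-aware so the devotional view can reuse it with different copy, but the shape is the same.

import SwiftUI

// Sketch of the tap-to-reveal AI disclosure. The production component takes a
// provenance value so the same badge serves Apple Intelligence and GPT-5 mini.
struct AIGeneratedBadge: View {
    let message: String
    @State private var showsMessage = false

    var body: some View {
        Button {
            showsMessage.toggle()
        } label: {
            Image(systemName: "sparkles")
                .font(.caption)
                .foregroundStyle(.secondary)
        }
        .buttonStyle(.plain)
        .popover(isPresented: $showsMessage) {
            Text(message)
                .font(.footnote)
                .padding()
                .presentationCompactAdaptation(.popover)
        }
    }
}

// Usage: AIGeneratedBadge(message: "Generated on device with Apple Intelligence.")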

Under the Hood Enhancements

  • Structured Generable commentary – Apple’s @Generable schema now returns summary, context, theology, application, literary, and historical fields, so the model fills a consistent pastoral template. We stream the partially generated structure and render each section live without fragile prompt parsing.
  • Canon-aware prompts – every request now tells the model whether the passage is Old Testament, New Testament, Apocrypha, or the Book of Enoch. That single line coaxes the commentary to mention extra-canonical status or covenant context without any manual post-processing.
  • Streaming delta logic – Apple’s snapshots include the full partial on every tick, so I diff against the previous string and append only the new delta (sketched just after this list). SwiftUI stays buttery smooth because the text view doesn’t redraw hundreds of words each frame.
  • Availability-aware UI – the “Try Again” button listens to an AvailabilityStatus enum, so it disables itself when the model is offline and unsupported OS versions get a concise explanation of the minimum requirement.
  • Reusable AI badge – a tiny AIGeneratedBadge component now decorates both on-device explanations and the Supabase devotional view. Tap it and you instantly know whether the words came from Apple Intelligence or GPT-5 mini.
  • Supabase hand-off – the daily devotional pipeline still runs through an Edge Function with OpenAI, but the UI labels that path clearly. Readers always know what happened locally versus in the cloud.
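
The delta trick from the streaming bullet is plain string suffixing; here's a minimal sketch, with the view-model plumbing that feeds the UI omitted.

// Given the previous cumulative partial and the latest one, return only the
// newly appended text so the view appends a small chunk instead of redrawing.
func newDelta(previous: String, current: String) -> String {
    guard current.hasPrefix(previous) else {
        // The model occasionally revises earlier text; fall back to the full string.
        return current
    }
    return String(current.dropFirst(previous.count))
}

// Example:
// newDelta(previous: "For God so loved", current: "For God so loved the world")
// returns " the world"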

What I’d Like Next

  • Try adapters once Apple opens them up, so I can mix in custom lexical data (think: church history timelines).
  • Explore tool calling to let the model pull local commentaries when someone asks for deeper background.
  • Build a quick “prompt playground” directly inside the app so non-dev teammates can iterate on examples without shipping a new build.
  • Add a lightweight chat view so readers can ask follow-up questions about the freshly generated explanation without leaving the app.

Until then, I’m thrilled with how approachable Apple Intelligence feels when you follow their availability + prompt guidance. If you’re building your own commentary or study tool, grab the sample repo, sprinkle in the links above, and have fun watching the on-device model breathe life into your UI.