Why an open model
When we started Rhema, "AI and the Bible" was already a split conversation. One camp thought AI had no business anywhere near Scripture. The other camp was happy to ship anything that generated plausible-sounding verse commentary. Neither of those felt right to us.
Rhema's in-app chat runs on closed API models today, and it will for the foreseeable future, because that's what produces the best experience right now. But if you care about what your model has read, how it handles contested passages, and whether the church can inspect and correct it over time, you eventually have to own the weights. Someone has to start.
So we trained one. It's called BibleAI, and as of today the weights are on Hugging Face and Ollama under Apache 2.0. This is a proof of concept, not the model behind the Rhema app. Think of it as us showing our work: here is what a small, purpose-trained Bible model can look like when anyone can download it, run it locally, fine-tune it further, or build it into their own tools. No permission required, no API key.
What BibleAI is
BibleAI is a fine-tuned version of Google's Gemma 4 E4B, a compact model in the roughly 8B-parameter range. The training happened in three stages: continued pre-training on a corpus of Christian texts, supervised fine-tuning on curated instruction data, and direct preference optimization against human-ranked responses. That last stage was small and targeted, fewer than a thousand preference pairs, yet it did more than any other stage to shape how the model actually behaves in conversation.
We ship it in two sizes. The q8 quantization is 8 GB and runs comfortably on a laptop; it will also run on a recent iPhone or Android flagship, which is the part we find genuinely exciting. The bf16 version is 15 GB, full precision, and still fits on a reasonably specced laptop. Both have a 128K-token context window: enough room for all four Gospels at once, with space left over for your question.
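Those file sizes follow from simple back-of-envelope arithmetic: bytes ≈ parameters × bits per weight ÷ 8. A minimal sketch, assuming a parameter count of 8B (our assumption; the post only gives a rough range):

```python
def quantized_size_gb(n_params: float, bits_per_weight: int) -> float:
    # params x bits, divided by 8 bits/byte and 1e9 bytes/GB.
    # Ignores file-format overhead, so treat it as an estimate.
    return n_params * bits_per_weight / 8 / 1e9

N_PARAMS = 8e9  # assumption: "roughly 8B-parameter range"

print(f"q8:   ~{quantized_size_gb(N_PARAMS, 8):.0f} GB")   # ~8 GB
print(f"bf16: ~{quantized_size_gb(N_PARAMS, 16):.0f} GB")  # ~16 GB
```

The bf16 estimate lands a little above the actual 15 GB file, which suggests the true parameter count sits slightly under 8B.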
What it's trained to do (and refuse)
BibleAI answers questions about Scripture, theology, church history, and Christian practice. Ask it about John 6 or Arminius and it will talk. Ask it to write your resume and it will redirect you.
The refusal part matters more than the answer part. The model is trained to quote Scripture from explicit reference rather than from memory, to skip the Greek or Hebrew when it is not sure, and to admit when it does not know something. The worst thing a Bible model can do is confidently hallucinate a verse that does not exist, or invent a patristic quote that sounds credible. We spent most of our training cycles on exactly that failure mode.
How to try it
If you have Ollama installed, one command gets you running:
`ollama run robzilla/bibleai:q8`
That pulls the 8 GB quantization and drops you into a chat. No API key, no internet after the first download, no telemetry.
If you want the raw weights, they are on Hugging Face at `rhemabible/BibleAI`. Load them with the `transformers` library, quantize them however you like, host them wherever makes sense.
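For the `transformers` route, here is a minimal sketch, assuming the repo ships standard causal-LM weights with a chat template; the generation settings are illustrative, not project recommendations:

```python
MODEL_ID = "rhemabible/BibleAI"  # Hugging Face repo named above


def format_chat(question: str) -> list:
    # Message shape expected by tokenizer.apply_chat_template.
    return [{"role": "user", "content": question}]


def ask(question: str, max_new_tokens: int = 256) -> str:
    """Download the weights on first call and generate a reply.
    Requires `transformers` and `torch`, plus ~16 GB disk for bf16."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    input_ids = tok.apply_chat_template(
        format_chat(question), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(input_ids, max_new_tokens=max_new_tokens)
    return tok.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True)
```

From there you can quantize with whatever toolchain you prefer and host the result wherever makes sense.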
What this does not replace
A model is a starting point for study. It is not the study itself, and it is not a substitute for reading Scripture slowly, sitting under teaching, or the ordinary means of grace.
Cross-reference everything BibleAI tells you. That is true for this model, it is true for Rhema's in-app chat, and it is true for any AI tool you use in this space. The model can help you find threads. You still have to pull them.
What's next
This is version one of a proof of concept, so we will say the honest thing about it. It leans in places toward Reformed readings when it should present multiple traditions. It sometimes over-hedges on questions that have settled answers. It is fast, but not as strong at long multi-step reasoning as the larger frontier models Rhema's in-app chat runs on today; those frontier models are still doing the heavy lifting inside the app, because right now they produce the best study experience for users. BibleAI is the research track running alongside that.
If you run BibleAI and find something broken, or a useful pattern, or a place where it plainly gets theology wrong, tell us. Email hello@rhemabible.co. We read everything.
We plan to publish the dataset recipe and training code in the coming weeks. The weights came first on purpose: having the artifact in people's hands matters more than having the perfect recipe written up. The long-term goal is for the church to have a set of open, inspectable, correctable AI tools that it actually owns. BibleAI is the first step in that direction, not the destination.