Case Study: Designing a Natural‑Language Database and Cross‑Referencing System


Overview

I wanted to explore whether natural language could function as a full interface for structured data design — not just for querying information, but for creating, populating, and analyzing a database.
What emerged was a prompting pattern that allowed me to build a media library, enrich it with metadata, and then cross‑reference it with an external dataset (the Lectionary for Advent 2025) to generate meaningful thematic insights.

This case study demonstrates how prompting can serve as a semantic database layer, enabling complex reasoning without code, schemas, or traditional tooling.


1. Problem

I needed a way to:

  • build a structured media library
  • enrich each entry with metadata
  • maintain consistency across entries
  • perform cross‑dataset analysis
  • surface thematic relationships between unrelated domains

I wanted to do all of this using only natural language, without switching tools or writing code.

The question was simple:
Can prompting alone support database‑level structure and reasoning?


2. Context

The experiment began with a straightforward request:
“Create a media library.”

From there, I added items one by one.
For each new entry, I asked the model to:

  • fetch metadata
  • normalize attributes
  • maintain consistent structure
  • update the dataset

This created a living, evolving database — entirely through conversation.

Once the library was populated, I introduced a second dataset:
the Lectionary readings for Advent 2025.

My goal was to see whether the model could:

  • interpret both datasets
  • identify thematic resonance
  • cross‑map concepts
  • produce a meaningful match

3. My Role

I acted as a prompt architect, responsible for:

  • defining the structure of the media library
  • guiding the model to populate metadata consistently
  • maintaining schema integrity through natural language
  • designing the cross‑reference prompt
  • evaluating the reasoning behind the output

I wasn’t “chatting.”
I was designing a system through conversation.


4. Approach

A. Natural‑Language Schema Design

I began by defining the core attributes of each media item:

  • title
  • creator
  • format
  • year
  • themes
  • genre
  • notable motifs

I didn’t write a schema — I described one.
The model inferred the structure and maintained it.
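The described schema never existed as code — the model held it implicitly. For illustration only, the record shape I described could be sketched as a small dataclass (the class name and field defaults here are my own assumptions, not anything the model emitted):

```python
from dataclasses import dataclass, field

@dataclass
class MediaItem:
    """One entry in the conversationally maintained media library."""
    title: str
    creator: str
    format: str  # e.g. "album", "film", "novel"
    year: int
    themes: list[str] = field(default_factory=list)
    genre: str = ""
    notable_motifs: list[str] = field(default_factory=list)
```

Describing this structure in plain language, rather than writing it, was the point: the model kept every entry consistent with it anyway.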

B. Metadata Enrichment

For each new entry, I repeated the enrichment loop described in the Context section: fetch metadata, normalize fields, maintain consistency, update the dataset.

This created a stable, structured library without any manual formatting.
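The normalization rules themselves lived in my prompts, not in code. A minimal sketch of what they enforced, assuming hypothetical field names and a dict-based raw record, might look like:

```python
def normalize_entry(raw: dict) -> dict:
    """Normalize a raw metadata dict into a consistent library shape.

    Hypothetical helper: the actual normalization happened in conversation,
    but the rules it enforced were roughly these — trimmed strings,
    lowercased categorical fields, deduplicated and sorted theme lists.
    """
    return {
        "title": raw.get("title", "").strip(),
        "creator": raw.get("creator", "").strip(),
        "format": raw.get("format", "").strip().lower(),
        "year": int(raw["year"]) if raw.get("year") else None,
        "themes": sorted({t.strip().lower() for t in raw.get("themes", [])}),
        "genre": raw.get("genre", "").strip().lower(),
        "notable_motifs": sorted({m.strip().lower() for m in raw.get("notable_motifs", [])}),
    }
```

Stating rules like these once, in plain language, was enough for the model to apply them to every subsequent entry.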

C. Cross‑Dataset Reasoning

Once the library was complete, I introduced the Lectionary readings.

I designed a prompt that asked the model to:

  • interpret the themes of the Advent passages
  • interpret the themes of each media item
  • identify conceptual resonance
  • justify the match

This required multi‑layer reasoning across two unrelated domains.
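To make the reasoning steps explicit, a prompt along these lines — a reconstruction of the structure I used, not the verbatim wording — separates interpretation from matching from justification:

```python
# Illustrative reconstruction of the cross-reference prompt's structure.
CROSS_REFERENCE_PROMPT = """\
You maintain two datasets: the media library built in this conversation,
and the Lectionary readings for Advent 2025.

1. For each Advent passage, summarize its central themes in 3-5 keywords.
2. For each media item, summarize its central themes in 3-5 keywords.
3. Identify the media item whose themes most strongly resonate with the
   Advent readings taken as a whole.
4. Justify the match, citing specific themes from both datasets.
"""
```

Numbering the steps mattered: it forced the model to interpret each dataset on its own terms before attempting the cross-mapping.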


5. Decisions & Tradeoffs

Decision: Use natural language instead of formal schema tools

This allowed for rapid iteration and conceptual flexibility, but required careful prompting to maintain consistency.

Decision: Treat metadata as a semantic layer

Rather than focusing on technical attributes, I emphasized thematic and narrative metadata — the kind that supports cross‑domain reasoning.

Tradeoff: Ambiguity vs. expressiveness

Natural language is expressive but imprecise.
The solution was to use structured phrasing within conversational prompts.


6. Outcome

The model identified To Pimp a Butterfly as the media item most thematically aligned with the Advent 2025 readings.

This wasn’t a novelty result — it was a demonstration of:

  • semantic mapping
  • thematic reasoning
  • cross‑domain pattern recognition
  • emergent insight

The model connected:

  • lamentation
  • liberation
  • prophetic critique
  • hope in the face of suffering
  • communal longing
  • eschatological themes

…across two datasets that were never designed to interact.

This demonstrated that:

Natural language can serve as a full interface for database creation, enrichment, and cross‑analysis — enabling complex reasoning without code.


7. What This Demonstrates About My Work

I design systems through language.

I don’t need formal schemas to build structured data — I can architect them conversationally.

I create prompts that support multi‑layer reasoning.

This case required the model to interpret, compare, and synthesize across domains.

I use prompting as a cognitive tool, not a query tool.

The goal wasn’t retrieval — it was insight.

I understand how to shape model behavior.

The consistency of the metadata and the quality of the cross‑reference were the result of intentional prompting patterns.

I treat AI as a collaborator.

This wasn’t automation.
It was co‑construction.
