A simulation is a research study run against a persona group. You choose a method, supply a prompt or stimulus, and Boses runs each persona through the study independently — then synthesises the individual results into an aggregate report.

Simulations run as background tasks. After you create one, Boses returns immediately and processes the work asynchronously. You poll GET /simulations/{id} to check progress and retrieve results when complete.
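The create-then-poll flow can be sketched as a small helper. This is a minimal illustration, not an official client: `poll_until_done` is a hypothetical function that takes any `fetch` callable returning the simulation JSON (e.g. a wrapper around the GET /simulations/{id} call shown below), and assumes the response carries a `status` field with the lifecycle values described in the next section.

```python
import time

# Terminal lifecycle states, per the simulation lifecycle table.
TERMINAL_STATES = {"complete", "failed", "aborted"}

def poll_until_done(fetch, interval=2.0, timeout=600.0):
    """Poll a simulation until it reaches a terminal state.

    `fetch` is any callable returning the simulation JSON as a dict,
    e.g. a wrapper around GET /simulations/{id}. Hypothetical helper,
    shown only to illustrate the polling pattern.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        sim = fetch()
        if sim["status"] in TERMINAL_STATES:
            return sim
        time.sleep(interval)
    raise TimeoutError("simulation did not finish within the timeout")
```

In practice `fetch` would wrap your HTTP client with the bearer-token header from the curl example below; keeping it as a callable keeps the polling logic independent of any particular client library.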

Simulation lifecycle

Every simulation moves through the following states:
pending → running → generating_report → complete
                                      → failed
                                      → aborted
Status             What it means
pending            Simulation created, queued for processing
running            Personas are actively being interviewed or surveyed
generating_report  Individual results are complete; aggregate report is being synthesised
complete           All results available
failed             An error occurred during processing
aborted            You cancelled the simulation before it finished
In-depth interview (manual) sessions use an additional active status while you are conversing with the persona. The session moves to generating_report when you call the end endpoint.
Poll for status:
curl https://api.temujintechnologies.com/api/v1/projects/<PROJECT_ID>/simulations/<SIM_ID> \
  -H "Authorization: Bearer <TOKEN>"
The progress field in the response shows per-persona completion during long runs, so you can display live progress to your users.
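A tiny formatter for that progress display might look like the following. The shape of the `progress` object here (`completed` and `total` counts) is an assumption for illustration — check the actual response body for the real field names.

```python
def format_progress(progress):
    """Render a per-persona progress object as a short display string.

    Assumes `progress` looks like {"completed": 7, "total": 20}; the
    real field shape may differ — verify against the API response.
    """
    done, total = progress["completed"], progress["total"]
    pct = round(100 * done / total) if total else 0
    return f"{done}/{total} personas ({pct}%)"
```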

Simulation types

Boses supports six simulation types. Each one models a distinct research methodology.
Type value    Method                             Best for
concept_test  Concept test                       Fast directional read on a new product, campaign, or messaging idea
survey        Structured survey                  Quantifiable preference data across a defined set of questions
focus_group   Focus group                        Emergent consensus, disagreement, and social dynamics within a group
idi_ai        In-depth interview (AI-moderated)  Uncovering unarticulated needs or emotional responses with probe follow-ups
idi_manual    In-depth interview (manual)        Exploring a specific hypothesis by driving the conversation yourself
conjoint      Conjoint analysis                  Understanding which product attributes drive choice and at what price point

Concept test
The most common starting point. Each persona reads your prompt and responds with:
  • A reaction written in their own voice
  • A sentiment (Positive / Neutral / Negative) with a numeric score
  • Key themes they noticed
  • A notable quote suitable for presentations
The aggregate result includes a sentiment distribution across the group, the top themes that surfaced, and strategic recommendations.
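To build intuition for what the aggregate contains, here is how a sentiment distribution could be derived from the per-persona results. This is an illustrative sketch only — Boses computes the aggregate server-side, and its actual synthesis logic is not shown here.

```python
from collections import Counter

def sentiment_distribution(individual_results):
    """Fraction of personas per sentiment label, mirroring the
    aggregate's sentiment distribution. Illustrative only — the real
    aggregation happens server-side."""
    counts = Counter(r["sentiment"] for r in individual_results)
    total = sum(counts.values())
    return {s: counts[s] / total for s in ("positive", "neutral", "negative")}
```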

Structured survey
You define questions in a survey_schema (Likert, multiple choice, or open-ended), and each persona answers every question independently. Upload a .txt or .docx file and Boses parses it into the schema automatically. Good for collecting structured preference data you can quantify.
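A survey_schema built by hand might look something like this. The exact field names (`questions`, `type`, `scale`, `options`) are assumptions for illustration — consult the API reference for the documented schema, or let Boses parse an uploaded .txt/.docx file instead.

```python
# Illustrative survey_schema payload; field names are assumptions,
# not the documented schema.
survey_schema = {
    "questions": [
        {"type": "likert", "text": "How likely are you to buy this serum?", "scale": 5},
        {"type": "multiple_choice", "text": "Which price feels right?",
         "options": ["₱299", "₱399", "₱499"]},
        {"type": "open_ended", "text": "What would stop you from buying this?"},
    ]
}
```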

Focus group
Personas interact as a group across multiple discussion rounds, moderated by an LLM. This captures emergent consensus, disagreement, and social dynamics that individual interviews miss — particularly useful for evaluating social acceptability or messaging that depends on peer influence.

In-depth interview (AI-moderated)
A free-form, multi-turn interview of each persona. The AI moderator probes for depth using a script you provide (.txt or .docx). Best for uncovering emotional responses or unarticulated needs that structured questions cannot surface.

In-depth interview (manual)
You drive the conversation directly with a single persona. Use this when you want to follow an unexpected thread from a previous simulation or test a specific hypothesis in real time. Send messages via POST /simulations/{id}/messages and end the session with POST /simulations/{id}/end to trigger report generation.
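The send-messages-then-end flow could be driven like this. `send_message` and `end_session` are hypothetical wrappers around your HTTP client for the two endpoints above; passing them in as callables keeps the sketch independent of any particular client.

```python
def run_manual_idi(send_message, end_session, turns):
    """Drive a manual IDI session: send each researcher turn, collect the
    persona's reply, then end the session to trigger report generation.

    `send_message` wraps POST /simulations/{id}/messages and returns the
    persona's reply text; `end_session` wraps POST /simulations/{id}/end.
    Both are hypothetical wrappers, shown only to illustrate the flow.
    """
    transcript = []
    for turn in turns:
        reply = send_message(turn)
        transcript.append({"you": turn, "persona": reply})
    end_session()  # moves the session from active to generating_report
    return transcript
```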

Conjoint analysis
Choice-based trade-off analysis. You define product attributes and their levels, and personas choose between configurations. The results reveal which features drive selection — and at what price point the persona’s preference shifts. Submit your attribute design via POST /simulations/{id}/conjoint-design.
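An attribute design might be assembled like this; the Cartesian product of the levels gives every configuration personas could be asked to choose between. The payload shape (`attributes`, `name`, `levels`) is an assumption for illustration, not the documented conjoint-design schema.

```python
from itertools import product

# Illustrative design for POST /simulations/{id}/conjoint-design.
# Field names are assumptions — check the API reference.
design = {
    "attributes": [
        {"name": "price", "levels": ["₱299", "₱399", "₱499"]},
        {"name": "size", "levels": ["30ml", "50ml"]},
        {"name": "claim", "levels": ["brightening", "hydrating"]},
    ]
}

# Every combination of levels is a candidate configuration.
configurations = list(product(*(a["levels"] for a in design["attributes"])))
```

Keeping the level count modest matters: three attributes with 3 × 2 × 2 levels already yields 12 configurations, and the space grows multiplicatively with each attribute you add.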

Result structure

Every simulation produces two layers of results, accessible at GET /simulations/{id}/results:
  • Per-persona results — each persona’s individual reaction, sentiment score, key themes, notable quote, and (for IDI and focus group runs) a full transcript.
  • Aggregate result — a synthesised view across all personas: sentiment distribution (percentage positive / neutral / negative), top themes shared across the group, and strategic recommendations.
curl https://api.temujintechnologies.com/api/v1/projects/<PROJECT_ID>/simulations/<SIM_ID>/results \
  -H "Authorization: Bearer <TOKEN>"
[
  {
    "id": "...",
    "result_type": "individual",
    "persona_id": "...",
    "sentiment": "positive",
    "sentiment_score": 0.74,
    "reaction_text": "This feels like it was made for someone like me...",
    "key_themes": ["affordable luxury", "skin tone relevance", "local ingredients"],
    "notable_quote": "Finally a serum that doesn't assume I want to look like someone else."
  },
  {
    "id": "...",
    "result_type": "aggregate",
    "persona_id": null,
    "sentiment_distribution": { "positive": 0.6, "neutral": 0.3, "negative": 0.1 },
    "top_themes": ["affordability", "local relevance", "trust signals"],
    "recommendations": "Lead with the local ingredient story. Price anchoring around ₱399 tested well..."
  }
]
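Since both layers arrive in one array, a client typically separates them by `result_type`, as in this minimal sketch:

```python
def split_results(results):
    """Separate the per-persona entries from the single aggregate entry
    in a GET /simulations/{id}/results response."""
    individual = [r for r in results if r["result_type"] == "individual"]
    aggregate = next(r for r in results if r["result_type"] == "aggregate")
    return individual, aggregate
```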

Tips for best results

Run a concept test first. It is the fastest way to get a directional read on an idea. Use focus groups or in-depth interviews to go deeper on specific findings from the concept test.
The richer your prompt_question, the richer the persona responses. Include price points, descriptions of visual elements, and competitive context. A prompt like “What do you think of a brightening serum priced at ₱399, positioned against Pond’s?” will generate more useful output than “What do you think of this skincare product?”
If you need to validate how consistent your results are, use the reliability check endpoint (POST /simulations/{id}/reliability-check) to run the same simulation multiple times and compute a confidence score across sentiment agreement, distribution variance, and theme overlap.
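The reliability check runs server-side, but one intuition for its theme-overlap component is a Jaccard similarity between the top-theme lists of two runs. This sketch is purely illustrative — the endpoint's actual scoring formula is not documented here.

```python
def theme_overlap(themes_a, themes_b):
    """Jaccard overlap between two runs' top-theme lists — one possible
    intuition for the reliability score's theme-overlap component.
    Illustrative only; not the endpoint's actual formula."""
    a, b = set(themes_a), set(themes_b)
    return len(a & b) / len(a | b) if a | b else 1.0
```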