Surveys give you structured, quantifiable data from your persona group. Unlike a concept test — which captures open-ended reactions — a survey lets you define the exact questions you want answered: rating scales, multiple-choice preferences, and open-ended follow-ups. Every persona answers every question independently, so you get both per-persona response sets and aggregate distributions across the group. Use a survey when you need to:
  • Measure agreement with specific statements on a Likert scale
  • Compare preference between a fixed set of options
  • Capture written explanations alongside structured answers
  • Produce results you can present as percentages or score averages

Question types

Boses supports three question types in a survey file:
Type | Format | Aggregate output
likert | Numeric rating on a defined scale (e.g., 1–5 or 1–7) | Mean score, distribution by rating value
multiple_choice | One selection from a list of options | Percentage choosing each option
open_ended | Free-text response | Extracted themes + representative quotes

Workflow

The survey simulation separates upload from execution: you upload your survey file, review the parsed questions, then confirm and start the run.
1. Create the simulation

Create a survey simulation. This creates the simulation record but does not start it yet — you need to upload your survey file first.
curl -X POST https://api.temujintechnologies.com/api/v1/projects/<PROJECT_ID>/simulations \
  -H "Authorization: Bearer <TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
    "persona_group_id": "<GROUP_ID>",
    "simulation_type": "survey"
  }'
Save the id from the response as SIM_ID.
2. Upload your survey file

Upload your survey as a .txt or .docx file. Boses parses the file using an LLM to extract and classify each question by type. This does not start the simulation — it gives you a chance to review the parsed schema first.
curl -X POST https://api.temujintechnologies.com/api/v1/projects/<PROJECT_ID>/simulations/<SIM_ID>/survey \
  -H "Authorization: Bearer <TOKEN>" \
  -F "file=@brand_tracker_q2.docx"
The response returns the parsed survey_schema so you can verify that each question was classified correctly before proceeding.
Supported file types are .txt and .docx. Structure your file with one question per line or paragraph. For multiple-choice questions, list the options on the lines immediately following the question. For likert questions, note the scale in parentheses — for example, “Rate your agreement (1–5)”.
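Following those conventions, a minimal .txt survey covering all three question types might look like this (the questions reuse the examples from this page and are purely illustrative):

```text
1. Rate your agreement: this snack is worth 45 PHP. (1–5)

2. Which flavour would you choose first?
- Classic Salted
- Chili Lime
- BBQ
- Sour Cream

3. In your own words, what would make you buy this snack more often?
```

Question 1 parses as likert (the scale is in parentheses), question 2 as multiple_choice (options on the following lines), and question 3 as open_ended.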
3. Confirm and start the run

Once you’re satisfied with the parsed schema, confirm it and start the simulation. Boses runs every persona through every question.
curl -X POST https://api.temujintechnologies.com/api/v1/projects/<PROJECT_ID>/simulations/<SIM_ID>/run \
  -H "Authorization: Bearer <TOKEN>"
The simulation moves to running status. Poll GET /simulations/<SIM_ID> until status is "complete".
4. Poll for completion

Check status while the simulation runs. The progress field tracks how many personas have completed the full questionnaire.
curl https://api.temujintechnologies.com/api/v1/projects/<PROJECT_ID>/simulations/<SIM_ID> \
  -H "Authorization: Bearer <TOKEN>"
{
  "id": "sim_01j...",
  "simulation_type": "survey",
  "status": "running",
  "progress": {
    "completed": 5,
    "total": 10
  }
}
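A polling loop is easy to get wrong (no backstop, busy-waiting), so here is a minimal sketch. It takes any zero-argument callable that returns the parsed status response as a dict — in practice that callable would wrap a `requests.get` to `GET /simulations/<SIM_ID>` with your bearer token (the wrapper itself is left out as an assumption about your HTTP client):

```python
import time

def wait_for_completion(fetch_status, interval_s=5, max_polls=120):
    """Poll fetch_status() until the simulation reports status "complete".

    fetch_status: zero-arg callable returning the parsed status body as a dict.
    Raises TimeoutError if the simulation has not completed after max_polls checks.
    """
    for _ in range(max_polls):
        body = fetch_status()
        if body["status"] == "complete":
            return body
        time.sleep(interval_s)
    raise TimeoutError("simulation did not complete in time")
```

With the defaults above, the loop gives up after roughly ten minutes; tune `interval_s` and `max_polls` to your persona group size.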
5. Fetch the results

Once status is "complete", retrieve the full results — individual per-persona answers, aggregate distributions, and the executive summary.
curl https://api.temujintechnologies.com/api/v1/projects/<PROJECT_ID>/simulations/<SIM_ID>/results \
  -H "Authorization: Bearer <TOKEN>"

Understanding the results

Survey results are structured at two levels.

Per-persona answers

Each persona’s responses are stored individually, so you can filter by any demographic attribute or cross-tabulate later. For a likert question you’ll see the numeric rating; for multiple choice, the selected option; for open-ended, a free-text response in the persona’s voice.
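Since per-persona answers carry demographic attributes, you can cross-tabulate them yourself once you have the results payload. A sketch of one way to do that — the field names (`persona`, `answers`, `question`, `answer`) are illustrative placeholders, not the exact API schema:

```python
from collections import defaultdict

def crosstab(responses, attribute, question):
    """Count answers to one question, segmented by a persona attribute.

    responses: list of per-persona records, each with a persona dict and an
    answers list (field names assumed for illustration).
    Returns {segment: {answer: count}}.
    """
    table = defaultdict(lambda: defaultdict(int))
    for record in responses:
        segment = record["persona"][attribute]
        for a in record["answers"]:
            if a["question"] == question:
                table[segment][a["answer"]] += 1
    return {seg: dict(counts) for seg, counts in table.items()}
```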

Aggregate distributions

For likert questions, you get the mean score across the group and the full distribution — how many personas chose each rating value. For multiple-choice questions, you get the percentage of personas who selected each option. For open-ended questions, Boses extracts the most common themes and includes representative quotes.
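Boses computes these aggregates server-side, but the shape is straightforward to reproduce locally if you ever need to re-derive them from raw per-persona ratings — for example, after filtering to a demographic segment:

```python
from collections import Counter

def summarize_likert(ratings):
    """Mean score and percentage distribution for a list of numeric ratings."""
    counts = Counter(ratings)
    n = len(ratings)
    mean = round(sum(ratings) / n, 2)
    # Percentage of respondents at each rating value, sorted by rating.
    dist = {value: round(100 * count / n) for value, count in sorted(counts.items())}
    return mean, dist
```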

Executive summary

The aggregate result also includes an executive summary written by Boses — a concise narrative that connects the quantitative patterns to strategic implications. If your likert scores show weak agreement on a specific statement, the summary will flag it; if one multiple-choice option dominates, it will draw the implication. Example aggregate snippet:
{
  "aggregate": {
    "questions": [
      {
        "question": "How likely are you to purchase this product at 45 PHP? (1–5)",
        "type": "likert",
        "mean_score": 3.4,
        "distribution": { "1": 10, "2": 15, "3": 25, "4": 30, "5": 20 }
      },
      {
        "question": "Which flavour would you choose first?",
        "type": "multiple_choice",
        "distribution": {
          "Classic Salted": 22,
          "Chili Lime": 48,
          "BBQ": 18,
          "Sour Cream": 12
        }
      }
    ],
    "executive_summary": "Purchase intent at 45 PHP is moderate (mean 3.4/5), with strong skew toward the top two boxes among respondents aged 22–30. Chili Lime commands a clear plurality, suggesting it should anchor the launch SKU. Consider A/B testing a 39 PHP entry price to push the top-2-box score above 50%."
  }
}
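The summary's top-2-box framing and the plurality call-out can both be recomputed directly from the `distribution` objects in the snippet above:

```python
def top_two_box(distribution):
    """Combined share of the two highest rating values in a Likert
    distribution keyed by rating strings, as in the aggregate payload."""
    top = sorted(distribution, key=int)[-2:]
    return sum(distribution[k] for k in top)

def plurality(distribution):
    """Option with the largest share in a multiple-choice distribution."""
    return max(distribution, key=distribution.get)
```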

Tips

Mix question types in a single survey. A few likert questions establish measurable benchmarks; open-ended questions capture the reasoning behind the scores. Together they give you both the number and the story.
If your survey has many questions, upload a .docx file with clear formatting — numbered questions, options as bullet points, and scale labels spelled out. This produces the most reliable parsed schema.