Generate Multiple Choice Questions from Any Text with AI

Tested prompts for generating multiple choice quizzes from any text, compared across 5 leading AI models.

You have a block of text, a document, a chapter, or a transcript, and you need multiple choice questions out of it fast. Maybe you're a teacher building a quiz, a trainer creating an assessment, a student testing your own retention, or a content creator building an interactive experience. Whatever the case, you're not looking for a lecture on AI; you're looking for a tool that works right now.

The core problem is that writing good multiple choice questions by hand is slow and cognitively demanding. You need a stem, a correct answer, and three or four plausible distractors that aren't obviously wrong. Do that for 20 questions and you've spent an hour on formatting instead of teaching.

This page shows you exactly how to use an AI prompt to turn any source text into publication-ready multiple choice questions in under a minute. You'll see the prompt, two real model outputs, a comparison of how each model handled the same input, and the tips that separate usable questions from ones you'll have to rewrite anyway.

When to use this

This approach works best when you have a defined body of source text and need questions grounded in that specific content. If the answers should come from the text itself rather than general knowledge, AI generation from text is faster, more accurate, and more consistent than writing from scratch.

  • A teacher converting a textbook chapter or lecture notes into a reading quiz
  • An L&D professional turning a compliance document or policy PDF into an employee assessment
  • A student creating self-test questions from class notes or a study guide
  • A course creator building knowledge checks between video lessons using the lesson transcript
  • A certification prep publisher generating practice questions from an official exam syllabus

When this format breaks down

  • When you need questions that test applied judgment or situational reasoning beyond what the text explicitly states, AI will generate surface-recall questions and miss the deeper objective.
  • When your source text is under 100 words, there is not enough content density to produce multiple distinct questions with strong distractors without the model inventing content.
  • When the subject matter is highly technical and errors in distractors could cause real harm, such as medical dosing or legal interpretation. Every question needs expert review before use.
  • When you need questions mapped to specific Bloom's Taxonomy levels or aligned to named learning standards, a raw generate-from-text prompt will not reliably hit those targets without heavy prompt engineering.

The prompt we tested

You are an expert quiz designer. Generate a multiple choice quiz based on the source text provided below.

Rules and formatting:
Generate 5 multiple choice questions, each with exactly 4 plausible options (A-D) where only one is correct and the distractors are believable but clearly wrong on close reading. Questions must be answerable solely from the provided text, vary in difficulty (2 recall, 2 comprehension, 1 inference), and avoid trick wording, 'all of the above', or 'none of the above'. Output each question in this exact format: 'Q1. <question>\nA) ...\nB) ...\nC) ...\nD) ...\nAnswer: <letter>\nExplanation: <one sentence>' with a blank line between questions.

Source text:
The Amazon rainforest, often called the 'lungs of the Earth,' produces roughly 20% of the world's oxygen and spans nine countries in South America, with about 60% located in Brazil. It is home to an estimated 10% of all known species on the planet. Deforestation, driven largely by cattle ranching and soy farming, has reduced the forest's total area by approximately 17% since 1970.

Produce the quiz now. For each question include: the question stem, four answer options labeled A-D, the correct answer, and a one-sentence explanation citing the source text.
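Because the prompt pins down an exact output format, the result can be parsed mechanically if you want to post-process it. The parser below is a minimal Python sketch and an assumption on our part, not part of the tested prompt; it only works if the model follows the Q/A/Answer/Explanation layout above.

```python
import re

def parse_quiz(text):
    """Parse 'Qn. ... A)-D) ... Answer: X / Explanation: ...' blocks into dicts."""
    questions = []
    # Each block starts with 'Qn.' and is separated by a blank line
    blocks = re.split(r"\n\s*\n(?=Q\d+\.)", text.strip())
    for block in blocks:
        stem = re.match(r"Q\d+\.\s*(.+)", block)
        options = dict(re.findall(r"^([A-D])\)\s*(.+)$", block, re.MULTILINE))
        answer = re.search(r"^Answer:\s*([A-D])", block, re.MULTILINE)
        explanation = re.search(r"^Explanation:\s*(.+)$", block, re.MULTILINE)
        if stem and answer:
            questions.append({
                "stem": stem.group(1).strip(),
                "options": options,
                "answer": answer.group(1),
                "explanation": explanation.group(1).strip() if explanation else "",
            })
    return questions
```

Once parsed, each question is a dictionary you can validate (exactly four options, answer letter present) before importing anywhere.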

Model comparison

Model                      Quality   Cost      Latency   Tokens
Claude Opus 4.7            Not run yet
Claude Haiku 4.5           Not run yet
GPT-5.4                    Not run yet
Gemini 2.5 Pro                       $0.0029   17.2s     298 / 514
Grok 4.1 Fast Reasoning              $0.0003   19.3s     437 / 342

Outputs

Gemini 2.5 Pro
Q1. According to the text, what percentage of the Amazon rainforest is located within Brazil?
A) Roughly 20%
B) Approximately 17%
C) An estimated 10%
D) About 60%
Answer: D
Explanation: The source text explicitly states that "about 60% [of the Amazon rainforest is] located in Brazil."

Q2. The source text identifies which activities as the primary drivers of deforestation in the Amazon?
A) Urban expansion and road construction
B) Logging and paper production
C) Cattle ranching and soy farming
D) Mining and oil exploration
Answer: C
Explanation: The text specifies that deforestation is "driven largely by cattle ranching and soy farming."

Q3. Why is the nickname 'lungs of the Earth' appropriate for the Amazon rainforest, based on the information provided?
A) Because it is home to 10% of all known species.
B) Because it spans nine different countries in South America.
C) Because it is a major producer of the world's oxygen.
D) Because its total area has been significantly reduced.
Answer: C
Explanation: The text supports this nickname by mentioning that the rainforest "produces roughly 20% of the world's oxygen."

Q4. What does the text state about the Amazon rainforest's contribution to global biodiversity?
A) Its biodiversity has been reduced by approximately 17% since 1970.
B) It contains a significant fraction of the world's known species.
C) It has more species than any other habitat on the planet.
D) It is home to 60% of all species in South America.
Answer: B
Explanation: The passage states the Amazon "is home to an estimated 10% of all known species on the planet."

Q5. Based on the text, what can be inferred about the impact of ongoing deforestation?
A) It likely threatens both global oxygen levels and a vast number of species.
B) The rate of deforestation has slowed significantly since 1970.
C) Brazil is the only country responsible for the reduction in the forest's area.
D) Soy farming has caused more forest loss than cattle ranching.
Answer: A
Explanation: Since the text establishes the Amazon's importance for oxygen production and biodiversity, one can infer that its reduction would negatively affect both.

Grok 4.1 Fast Reasoning
Q1. What is the Amazon rainforest often called?
A) Heart of the Earth
B) Lungs of the Earth
C) Breath of South America
D) Oxygen Factory of the Planet

Answer: B
Explanation: The source text states that the Amazon rainforest is often called the 'lungs of the Earth.'

Q2. Approximately what percentage of the world's oxygen does the Amazon rainforest produce?
A) 10%
B) 20%
C) 25%
D) 30%

Answer: B
Explanation: The source text indicates that the Amazon rainforest produces roughly 20% of the world's oxygen.

Q3. In which country is about 60% of the Amazon rainforest located?
A) Colombia
B) Peru
C) Venezuela
D) Brazil

Answer: D
Explanation: The source text specifies that the Amazon spans nine countries with about 60% located in Brazil.

Q4. According to the source text, what has primarily driven deforestation in the Amazon rainforest?
A) Mining and logging
B) Urban expansion and tourism
C) Cattle ranching and soy farming
D) Hydropower dams and road construction

Answer: C
Explanation: The source text states that deforestation is driven largely by cattle ranching and soy farming.

Q5. Based on the source text, which country can be inferred to contain the majority of the Amazon rainforest and thus be most impacted by its deforestation?
A) Peru
B) Colombia
C) Bolivia
D) Brazil

Answer: D
Explanation: The source text notes that the Amazon spans nine countries with about 60% located in Brazil, implying Brazil holds the largest share and faces the greatest deforestation impact.

What makes these work

  1. Paste the exact source text

    Do not summarize or paraphrase your source material before sending it to the model. Paste the raw text. Summaries strip out the specific facts, dates, names, and numbers that make strong distractors possible. The more precise your input, the more precise and defensible the questions.

  2. Specify number and difficulty

    Tell the model how many questions you want and at what level. 'Generate 5 recall-level questions' produces very different output than 'Generate 5 application-level questions.' Without this instruction, most models default to straightforward recall, which may be too easy for your audience.

  3. Ask for distractor rationale

    Adding 'explain why each wrong answer is plausible' to your prompt forces the model to construct distractors that test real comprehension rather than padding with obviously false options. This single instruction significantly reduces the rewrite rate for generated questions.

  4. Request answer keys separately

    If you are exporting questions to a quiz platform or sharing with students, ask the model to output two formats: one with answers marked and one without. This saves a formatting step and prevents accidental answer leakage when copying the student-facing version.

More example scenarios

#01 · High school biology teacher quizzing a textbook passage
Input
Photosynthesis occurs in the chloroplasts of plant cells. During the light-dependent reactions, sunlight is absorbed by chlorophyll and used to split water molecules, releasing oxygen as a byproduct and producing ATP and NADPH. These energy carriers then power the Calvin cycle, where carbon dioxide is fixed into glucose.
Expected output
Q: What is the primary byproduct released when water molecules are split during the light-dependent reactions?
A) Carbon dioxide
B) Glucose
C) Oxygen (correct)
D) ATP

Q: Which molecules produced in the light-dependent reactions power the Calvin cycle?
A) Oxygen and water
B) ATP and NADPH (correct)
C) Glucose and CO2
D) Chlorophyll and sunlight
#02 · HR team converting a remote work policy into a compliance quiz
Input
Employees working remotely must use a company-approved VPN at all times when accessing internal systems. Personal devices may only be used if enrolled in the company MDM program. Any data breach or suspected breach must be reported to the IT Security team within two hours of discovery, regardless of the time of day.
Expected output
Q: When must a remote employee report a suspected data breach to IT Security?
A) By the end of the business day
B) Within 24 hours
C) Within two hours of discovery (correct)
D) Only during business hours

Q: Under what condition may an employee use a personal device for work?
A) With manager approval
B) If enrolled in the company MDM program (correct)
C) For read-only access only
D) Never under any circumstances
#03 · Medical education platform generating USMLE-style questions from a pharmacology summary
Input
Beta-blockers competitively inhibit catecholamines at beta-adrenergic receptors, reducing heart rate and myocardial contractility. They are first-line for chronic stable angina and post-MI management. Common side effects include bradycardia, fatigue, and bronchospasm. They are contraindicated in patients with asthma or decompensated heart failure.
Expected output
Q: A patient with chronic stable angina and a history of asthma is being evaluated for anti-anginal therapy. Which drug class is contraindicated?
A) Calcium channel blockers
B) Nitrates
C) Beta-blockers (correct)
D) Ranolazine

Distractor rationale: the other options are not contraindicated in asthma, so the question tests application of the source content rather than simple recall.
#04 · Corporate trainer building a quiz from a customer service training transcript
Input
When a customer calls to dispute a charge, the representative should first acknowledge the frustration without admitting fault. Then pull up the account and verify the transaction details before offering any resolution. Never promise a refund before checking the refund eligibility window, which is 30 days from purchase.
Expected output
Q: What should a representative do before offering a resolution to a disputed charge?
A) Immediately escalate to a supervisor
B) Promise a refund to de-escalate the call
C) Pull up the account and verify the transaction details (correct)
D) Ask the customer to call back during business hours

Q: What is the refund eligibility window according to company policy?
A) 14 days
B) 60 days
C) 90 days
D) 30 days (correct)
#05 · Self-studying for a history exam using class notes
Input
The Treaty of Versailles, signed in June 1919, formally ended World War I. It held Germany responsible for the war under the War Guilt Clause, imposed reparations of 132 billion gold marks, and stripped Germany of its overseas colonies and significant European territory including Alsace-Lorraine, which was returned to France.
Expected output
Q: Which territory was returned to France under the Treaty of Versailles?
A) The Rhineland
B) The Sudetenland
C) Alsace-Lorraine (correct)
D) Bavaria

Q: What did the War Guilt Clause specifically establish?
A) The amount of reparations Germany owed
B) Germany's responsibility for causing the war (correct)
C) The timeline for German disarmament
D) The transfer of colonial territories

Common mistakes to avoid

  • Sending too little text

    When the source passage is too short or vague, the model fills gaps with general knowledge unrelated to your content. Questions drift away from what you actually want tested. Use at least a full paragraph of substantive content per two or three questions you want generated.

  • Accepting the first output without reviewing

    AI-generated distractors occasionally contain factual errors or overlap in meaning with the correct answer, which invalidates the question. Always read each question and verify the correct answer against the source text before using questions in a graded or high-stakes context.

  • Forgetting to set the audience level

    A question appropriate for a graduate seminar is not appropriate for a fifth-grade reading quiz, and vice versa. If you do not specify your audience, the model picks a default that may mismatch your learners and produce questions that are either too dense or too obvious.

  • Using the same passage for every question

    If all 20 questions come from a single dense paragraph, questions start testing the same facts repeatedly. Break long documents into logical sections and generate a set of questions per section. This distributes coverage and produces a more valid assessment.

  • No instruction on question format

    Without specifying four options, the model may generate three, or use True/False framing, or add sub-questions. State your format explicitly: 'four answer choices labeled A through D, one correct answer.' Consistent format matters when you are importing into a quiz platform.
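The "break long documents into logical sections" advice above can be sketched in Python. The paragraph-based splitting and the 300-word budget here are assumptions for illustration, not part of the tested prompt:

```python
def chunk_text(text, max_words=300):
    """Group blank-line-separated paragraphs into chunks of at most max_words words."""
    chunks, current, count = [], [], 0
    for para in text.split("\n\n"):
        words = len(para.split())
        # Start a new chunk when adding this paragraph would exceed the budget
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Run the quiz prompt once per chunk and you get coverage spread across the whole document instead of twenty questions about one paragraph.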

Frequently asked questions

Can I generate multiple choice questions from a PDF?

Yes, but you need to extract the text first. Copy and paste from the PDF directly, or use a tool like Adobe Acrobat or a free PDF-to-text converter. Once you have plain text, paste it into the prompt. Most AI models do not accept PDF file uploads in their default interfaces, though some platforms with file upload support can handle it natively.

How many questions can I generate from one passage?

A practical rule is one to two solid questions per 50 to 100 words of source text. Push beyond that and the model starts repeating concepts or generating questions with no basis in the text. For a 500-word article, aim for five to eight questions. For longer documents, chunk the text and run the prompt in sections.
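The rule of thumb above is easy to encode if you are batching many documents. The 75-word divisor below is our assumption, taken as a midpoint of the article's 50-to-100-word range:

```python
def question_budget(word_count, words_per_question=75):
    """Rough heuristic: about one solid question per 75 words of source, minimum 1."""
    return max(1, word_count // words_per_question)
```

A 500-word article lands at six questions, inside the five-to-eight range suggested above.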

Which AI model is best for generating multiple choice questions?

The GPT and Claude families produce the most consistently accurate distractors and handle nuanced subject matter better than smaller models. Gemini performs well on structured factual content. For most everyday quiz generation tasks, any current frontier model works if your prompt is specific. The comparison table on this page shows how the models we tested handled the same input.

How do I make sure the questions are not too easy?

Add a difficulty instruction to your prompt: 'make distractors plausible by using terms from the same domain' or 'write application-level questions that require understanding, not just memorization.' Avoid letting the model use the exact wording from the source text as the correct answer, since that turns the question into a scan-and-match exercise instead of a comprehension check.

Can I use AI-generated quiz questions commercially?

Generally yes, but verify the terms of service for the specific AI platform you are using. The questions themselves, as AI output, typically do not carry copyright in most jurisdictions as of current legal guidance. However, if your source text is copyrighted material you do not own, using that text to generate commercial assessments may raise separate copyright concerns around the source content.

How do I export the generated questions to Google Forms or a quiz platform?

Ask the model to format the output as a numbered list with a clear answer key at the end, or request CSV-style formatting if your platform supports import. For Google Forms, you currently need to paste questions manually or use a third-party Forms add-on that accepts structured text input. Platforms like Quizizz and Kahoot have import templates that a well-structured AI output can map onto directly.
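If your platform accepts CSV import, structured output can be converted mechanically. A sketch assuming the questions have already been parsed into dictionaries with stem, options, and answer keys; this schema and the column order are hypothetical, so check your platform's import template for the exact layout:

```python
import csv
import io

def quiz_to_csv(questions):
    """Serialize parsed question dicts (hypothetical schema) as CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Question", "A", "B", "C", "D", "Answer"])
    for q in questions:
        opts = q["options"]
        writer.writerow([q["stem"], opts.get("A", ""), opts.get("B", ""),
                         opts.get("C", ""), opts.get("D", ""), q["answer"]])
    return buf.getvalue()
```

The csv module handles quoting automatically, so stems containing commas or quotation marks import cleanly.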