Bolt App Reader Agent is a premium feature subject to Element451's usage-based pricing.
Overview
The Bolt App Reader Agent is a powerful tool built into the Decisions module that helps streamline your application review process. It acts as the "first reader" of an application, analyzing submitted materials and scoring the decision based on your defined criteria and AI instructions. This ensures a fast, consistent, and objective first review, allowing your admissions team to focus on engaging with applicants rather than getting bogged down in manual evaluations.
Enabling + Setting Up the Agent
Before the Bolt App Reader Agent can evaluate applications, you must enable the feature and configure its settings. This section walks you through designating which Decision Board stages will use AI evaluation and creating the AI instructions that guide how criteria are scored.
The Bolt App Reader Agent is disabled by default and must be enabled by an administrator in Billing settings.
Configuring Decision Stages
You can enable the agent for as many stages in your Decision Board as you'd like. However, we recommend configuring your board so the agent reviews each application only once, since credits apply each time the agent runs.
Go to Applications > Decisions > Decision Settings.
Click the Board tab.
Find the stage where you want to enable the reader agent.
Click the three horizontal dots icon in the top-right corner of the stage card.
Select Enable AI App Reader.
An orange lightning bolt icon will appear on enabled stage cards.
Adding AI Instructions
For the agent to evaluate a criterion, you must provide AI instructions:
Go to Applications > Decisions > Decision Settings.
Click the Criteria tab.
Find and edit the criterion you want the agent to evaluate.
Go to the General tab in the Edit Criteria side panel.
Locate AI Instructions and enter specific guidance. Be clear and specific: think of the instructions as directions you'd give a human reviewer to ensure consistency in scoring.
Repeat for each criterion as needed.
The Bolt App Reader Agent will not score a criterion if AI instructions are not provided.
Example of AI Instructions
For a criterion like "Academic Performance," you might provide instructions like:
Review the applicant's GPA and transcripts and compare them to the institution's average admitted GPA of 3.2. Consider the difficulty of coursework (AP, IB, honors, or advanced-level classes) and any trends in academic performance from the transcript (e.g., improvement over time or consistency).

Scoring Guidelines:
- GPA of 3.5 or higher with at least 3 AP or Honors classes: score 7-10
- No AP or Honors courses: score no higher than 5
- GPA 2.5-2.99: score 2-3
- Overall GPA below 2.0: score 1, regardless of AP or Honors mix
This detailed guidance helps the agent evaluate applications consistently with your institution's specific admissions priorities.
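Purely as an illustration, the example scoring guidelines above can be written out as deterministic rules. The function name and the fallback values for cases the rubric leaves open (e.g., a GPA between 3.0 and 3.49) are hypothetical, and this is not how the agent works internally; sketching your rubric this way can help you spot gaps or conflicting thresholds before handing the instructions to the agent.

```python
def score_academic_performance(gpa: float, ap_honors_count: int) -> int:
    """Illustrative mapping of the example 'Academic Performance' rubric.

    This mirrors the sample AI Instructions as deterministic rules; it is
    NOT the Bolt App Reader Agent's implementation.
    """
    if gpa < 2.0:
        # Below 2.0 overall GPA: score 1 regardless of AP/Honors mix.
        return 1
    if 2.5 <= gpa < 3.0:
        # GPA 2.5-2.99: score 2-3 (lower bound used here).
        return 2
    if gpa >= 3.5 and ap_honors_count >= 3:
        # GPA 3.5+ with at least 3 AP/Honors classes: score 7-10.
        return 7
    if ap_honors_count == 0:
        # No AP or Honors courses: score no higher than 5.
        return 5
    # Cases the rubric doesn't cover fall back to a mid-range score
    # (a gap you'd want to close in the real instructions).
    return 5
```

Walking through a few sample applicants this way makes it easy to confirm the thresholds behave the way you intend before the agent starts scoring real applications.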
The Evaluation + Scoring Process
Once configured, the Bolt App Reader Agent evaluates applications using a systematic approach. This section explains how the agent analyzes application materials, what settings it uses from your criteria configuration, how it handles documents and cross-checks information, and the categorical scoring system it uses to classify applications.
How Applications Are Scored
When an application reaches a stage where the Bolt App Reader Agent is enabled, it automatically activates and begins its evaluation process.
The agent reviews the application and all associated documents.
It evaluates each criterion for which you've provided AI instructions.
It assigns numerical scores to each criterion based on your max score settings.
It determines an overall categorical rating for the application.
Unlike human reviewers, the Bolt App Reader Agent's scores do not contribute to the application's overall calculated score. This means:
When only the agent has evaluated a decision, the overall score will still show 0.
Settings like aggregate weight, score type, etc., do not apply to the agent's evaluation.
The agent provides a categorical overall assessment instead (Highly Qualified, Qualified, etc.).
Document Analysis + Cross-Checks
As part of the evaluation process, the agent:
Identifies and analyzes relevant application documents for each criterion (transcripts, essays, resumes, etc.).
Performs cross-reference checks to verify consistency across documents and flags inconsistencies that might impact scoring (such as mismatched personal information). When significant inconsistencies are detected, the agent skips evaluating that criterion.
The agent's thoroughness helps catch discrepancies that might be missed in a manual review, ensuring that scores are based on reliable information.
Scoring System + Decision Categories
The Bolt App Reader Agent classifies applications into five categories:
Highly Qualified
Qualified
Neutral
Unqualified
Highly Unqualified
These categories provide a quick assessment of an application's overall strength based on your institution's specific criteria and standards.
Viewing + Using AI Analysis
The agent's evaluation results are integrated throughout the Decisions module interface. This section shows you where to find AI analysis results, how to interpret the different components of the analysis sidebar, and how to use the interactive document features to verify and understand the agent's assessment.
Access Points
You can access the Bolt App Reader Agent's analysis in several places:
All Decisions Table: Quickly scan the Bolt Analysis column for a high-level view of the agent's score. To explore full analysis details, open the decision and use one of the other methods below.
Decision Header: Ideal for quickly reviewing the agent’s evaluation without leaving your current view. Click the AI Analysis chip in the decision header to open a side panel with the agent’s full reasoning and criterion-level scores.
Human Review View: When you're scoring an application or reviewing human scores, the agent's analysis appears alongside human input in two places:
Criteria Tab: Use this when scoring an application yourself or reviewing human scores. The agent’s numerical score and reasoning for each criterion appear under the “Bolt Application Reader Score” section.
Application Reader Interface: Use this view when you want a snapshot of the agent’s evaluation alongside other application information. Shows the agent’s overall score and summary only.
Understanding the AI Analysis Sidebar
When you open the AI Analysis sidebar, you'll find two tabs: Review and Reasoning & Feedback.
Review Tab
Overall summary and categorical score at the top
Breakdown for each evaluation criterion with individual scores and summaries
Reasoning & Feedback Tab
This tab provides a comprehensive history of all evaluation activity, including initial assessments, re-runs, and feedback requests. It shows:
The complete evaluation flow and reasoning process (expand the "Reasoning" header to see the detailed process).
Explanation of which documents were read and analyzed.
Clickable document chips that open referenced materials for quick review.
The agent's opinion/evaluation summary for each criterion with assigned scores.
When a re-run is initiated with feedback tied to a specific criterion, the agent re-evaluates only that criterion and updates its score and summary.
The final section always displays the most current evaluations from all previous runs, providing a comprehensive "final decision" view.
Document References + Viewing
The Bolt App Reader Agent provides clickable document chips that let you quickly verify its assessment by seeing exactly what it used when scoring:
Clickable Document Chips: Documents referenced in the analysis appear as clickable chips.
Contextual Document Viewing: Clicking a chip opens the document in a side panel, keeping you in context.
Cross-Check Warnings: The agent clearly explains any inconsistencies it detected across documents.
Providing Feedback + Re-Running Evaluations
The Bolt App Reader Agent is designed to work collaboratively with your admissions team. This section covers how to enhance the AI evaluation with your expertise through feedback, access the feedback feature, and initiate re-evaluations when application information changes or settings are updated.
Feedback-Based Re-Evaluation
The feedback feature lets you enhance the agent's evaluation with your expertise and contextual knowledge.
Use feedback when:
You have additional information about an applicant not captured in their submitted documents.
Special circumstances or contexts should be considered in the evaluation.
Certain achievements or challenges need more weight in the assessment.
You want to ensure consistency with your institution's holistic review approach.
The agent may have missed nuanced elements in complex application materials.
Targeted feedback helps create a more comprehensive and accurate evaluation, combining AI efficiency and human insight.
Where + How to Provide Feedback
Accessing Feedback Form
From the Decision Header: Click the AI Analysis chip to open the sidebar, then navigate to the Reasoning & Feedback tab.
From the Criteria Tab: When viewing a decision's criteria, each criterion has a feedback button that serves as a shortcut, opening the sidebar directly to the feedback area for that specific criterion.
Providing Feedback
Select from the dropdown which context you want to provide feedback for:
Choose a specific criterion to provide feedback on that item.
Select "General" to provide overall feedback that doesn't pertain to a specific criterion.
Add your feedback or additional context.
Click "Submit" to initiate the re-run.
Repeat steps 1-3 for each criterion you want to provide feedback on.
Manual Re-Runs
You may need to re-run an evaluation without providing feedback in several scenarios:
When application information has been updated.
After changing criteria settings (like max score values).
When you want a fresh evaluation.
To manually re-run an evaluation:
Re-running an evaluation uses additional credits.