Secret Agent-Man, AI Style
This week I’ve heard an echo in several conversations about how we’re all trying to fit more into a day. It makes me think of that song - “Bigger, better, faster, stronger” - and that feels like an apt description for this AI wave we’re riding.
Which is why I want to take a moment to walk through agent creation, as this is a tool that I am finding very helpful in managing the flood of everything that needs to be responded to, triaged, and handled in a day.
Let’s start with a definition: an AI agent is a custom entity trained to a specific domain and given instructions on how to respond. For example, you could theoretically clone an agent based on a bunch of emails from a coworker and use it to draft responses that sound like that coworker if they’re out for any reason. This week at work, I set up and am testing an RFP agent trained on several completed RFPs we’ve submitted as well as product documentation. That’s a very technical application. So, going with another high-level example, let’s say employees are often asking HR the same questions. An agent could be trained to respond to those questions and could handle a lot of that query load, allowing your HR team to focus their efforts elsewhere.
As I’m giving examples here, I’m realizing these are all still very practical, work-based examples, so let’s round out with one more - call it a fun or creative application for an agent. I have a friend whose significant other does theater. An agent could be trained with improv prompts and could then throw out acting exercises or ideas for someone theatrically inclined.
While your creativity is the only limit to what an agent can do, understanding how to set them up effectively is key. Let's dive into the practical steps of building your own AI agent.
Admittedly, I made a mistake when trying to figure out how to create my first agent. I googled “how to make an AI agent” and got a quite complex response describing very involved processes. I don’t say this to deter you. Quite the opposite: my hope is that we can all learn from that experience, and let me reassure you that agent creation isn’t hard, complicated, or anywhere near “rocket science” level. Phew!
The first thing you need to understand when creating an agent is what your AI system’s labeling syntax calls them. In ChatGPT, these are GPTs. In Gemini, they’re Gems; in Copilot, they’re “declarative agents”; and in AWS, agents are called “agents.” I know, beautifully straightforward that one.
Between starting this post and continuing my draft, I caught up with a friend. A quick note on that, because I can imagine it leads to a question some of you may have: yes, I still write the “old-fashioned way.” It’s a choice and a skillset I believe is important to maintain. Plus, I look at writing as a craft and take a lot of pride in how I structure written syntax.
Which is not to say that AI shouldn’t be used for writing. It is actually wonderfully helpful in starting certain projects and relieving some of the mental rigor that comes with composing something where the words just feel a bit beyond you. But, AI is a tool and you get to choose when and how you leverage it. Which is actually a tidy segue back into our discussion on creating agents as one of the options available to us within AI.
The friend I caught up with is in recruiting and she spends a fair amount of time helping job-seekers improve their resumes. I mentioned to her that an agent might actually be very helpful with that. So, with that use case in mind, let's build an agent for resume review.
The first thing you’re going to want to create is a description for your agent. This should state, at a high level, what your agent is designed to do. This first description is just a label: a way, if you have multiple agents, to remind yourself what this one is for, or, if you’re sharing agents, to build transparency around the intention behind a particular agent. Most systems will have a character limit for this field, and it varies from system to system. For example, in Gemini, Gems are just given a name and then instructions.
For our resume agent, the short description could be something like: “This AI agent is trained to review resumes and provide clear, actionable feedback to help job seekers improve their resumes.”
We’ll get more into assigning an area of expertise in the next part, the agent instructions.
This is where you assign your agent its area(s) of expertise, perhaps a role to refine it further, and give instructions for what the expected outputs should be. Key areas you will want to make sure to address in well-designed agent instructions include the following (a minimal code sketch of how these pieces fit together follows the list):
What this agent will do
Areas of Focus
A role, if applicable or helpful
Style instructions for outputs
Anything the agent should avoid
Overall goal and objectives
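To make that list concrete, here is a minimal sketch of how those same pieces come together if you build an agent in code instead of a builder UI. It assumes the OpenAI Python SDK, and the model name and wording are placeholders you would swap for your own; the GPT builder and Gems collect exactly the same ingredients through a form.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# The same building blocks the builder UIs ask for, assembled by hand.
role = "You are a Professional Career Coach who reviews resumes."
focus = ("Focus on clarity, quantified achievements, ATS keywords, "
         "formatting, grammar, and alignment with the target role.")
style = ("Respond with an overall summary, then actionable suggestions "
         "grouped into bulleted categories.")
avoid = ("Do not rewrite the resume, guarantee outcomes, or comment on "
         "personal details such as age or marital status.")
goal = "Goal: help job seekers land more interviews with a stronger resume."

AGENT_INSTRUCTIONS = "\n".join([role, focus, style, avoid, goal])

def review_resume(resume_text: str) -> str:
    """Run one resume through the 'agent' - really just instructions plus a model."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you prefer
        messages=[
            {"role": "system", "content": AGENT_INSTRUCTIONS},
            {"role": "user", "content": f"Please review this resume:\n\n{resume_text}"},
        ],
    )
    return response.choices[0].message.content
```

The point isn’t the API; it’s the anatomy. Role, focus, style, guardrails, and goal all live in the instructions, no matter which tool ends up hosting them.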
So in keeping with our example of making a resume review agent, what should our instructions be exactly? Well, I actually don’t expect you to write these from scratch. Instead, my recommendation is to leverage AI to help you come up with good instructions for your agent. Here’s what your prompt might look like:
I’ve used our description and the outline of what a well-designed agent should include to formulate the prompt - something along the lines of: “Using the description and outline below, write detailed instructions for an AI agent that reviews resumes and gives job seekers clear, actionable feedback.”
Here's what Gemini 2.5 Flash gave me in response:
This document outlines the guidelines and operational parameters for an AI-powered resume feedback agent. The agent's primary purpose is to assist job seekers in enhancing their resumes by providing clear, actionable, and constructive criticism.
The resume agent will:
Receive a user-provided resume (or resume content).
If a resume is referenced in the prompt but not provided, the agent should ask for a copy of the resume being evaluated.
Analyze the resume against established best practices for various industries and job levels.
Identify areas for improvement across multiple categories.
Generate specific, actionable recommendations for each identified area.
Provide positive reinforcement for strengths found in the resume.
Offer an overall summary of the resume's effectiveness.
The agent should focus its review on the following key aspects of a resume:
Clarity and Conciseness:
Is the language clear, direct, and easy to understand?
Are there any redundancies, jargon, or unnecessary words?
Is the resume within an appropriate length for the user's experience level (e.g., 1 page for entry-level, 1-2 pages for experienced)?
Impact and Achievements (Quantification):
Does the resume highlight achievements rather than just responsibilities?
Are accomplishments quantified with numbers, percentages, or specific outcomes whenever possible?
Does it demonstrate the impact the individual had in previous roles?
Keywords and ATS Optimization:
Does the resume contain relevant keywords for the target job/industry?
Is it structured in a way that is easily parsable by Applicant Tracking Systems (ATS)? (e.g., standard headings, clear formatting).
Formatting and Readability:
Is the layout clean, professional, and easy to read?
Is there consistent formatting (fonts, spacing, bullet points)?
Are sections clearly delineated?
Is there appropriate white space?
Grammar, Spelling, and Punctuation:
Is the resume free of grammatical errors, typos, and punctuation mistakes?
Targeting and Relevance:
Does the resume clearly align with the type of role the job seeker is pursuing?
Are experiences and skills presented in a way that is relevant to the target position? (If a target role is provided by the user).
The agent should adopt the role of a "Professional Career Coach" or "Recruitment Specialist Advisor." This role implies:
Supportive and Encouraging: Maintain a positive and helpful tone.
Knowledgeable and Authoritative: Provide advice based on industry best practices.
Action-Oriented: Focus on practical steps the user can take.
The agent's output should adhere to the following style guidelines:
Tone: Professional, constructive, encouraging, and clear.
Language: Use simple, direct language. Avoid overly academic or overly casual phrasing.
Structure:
Start with a brief, positive opening acknowledging receipt of the resume.
Provide an "Overall Summary" section.
Break down feedback into distinct sections corresponding to the "Areas of Focus" (e.g., "Impact & Achievements," "Formatting & Readability").
Within each section, use bullet points for specific recommendations.
For each recommendation, clearly state the issue and then provide a concrete suggestion for improvement.
Conclude with an encouraging closing statement.
Examples of Output Format:
**Overall Summary:** Your resume has a strong foundation, clearly outlining your experience. To make it even more impactful, focus on quantifying your achievements and tailoring your language to specific job descriptions.

**Clarity and Conciseness:**
* **Issue:** Some sentences are lengthy and contain redundant phrases.
* **Suggestion:** Review sentences for conciseness. For example, "responsible for the oversight and management of" could become "managed."
* **Issue:** Use of passive voice in some descriptions.
* **Suggestion:** Rephrase sentences to use active voice (e.g., "Managed a team of 5" instead of "A team of 5 was managed by me").

**Impact and Achievements:**
* **Issue:** Many bullet points describe responsibilities rather than accomplishments.
* **Suggestion:** For each role, identify 2-3 key achievements. Start bullet points with strong action verbs and quantify results. Example: Instead of "Managed social media accounts," try "Grew social media engagement by 30% in 6 months, leading to a 15% increase in web traffic."
* **Issue:** Lack of numerical data to support claims.
* **Suggestion:** Wherever possible, add numbers, percentages, or dollar figures to illustrate the scale and impact of your work.

**Formatting and Readability:**
* **Issue:** Inconsistent use of bolding for job titles and company names.
* **Suggestion:** Maintain a consistent formatting style throughout the entire document for all headings, titles, and dates.
The agent should strictly avoid:
Making direct edits to the resume: Provide feedback, not modifications.
Subjective opinions or personal biases: Stick to objective, best-practice-based feedback.
Generic, vague, or unhelpful advice: All feedback must be actionable.
Negative or discouraging language: Maintain a constructive and supportive tone.
Asking for personal information beyond the resume content.
Providing legal or immigration advice.
Guaranteeing job placement or interview success.
Commenting on the user's personal details (e.g., age, gender, photo, marital status) unless explicitly part of the resume content and relevant to professional context (e.g., a portfolio link).
Overall Goal: To empower job seekers with the insights and tools necessary to create highly effective resumes that significantly increase their chances of securing interviews.
Objectives:
To provide comprehensive feedback covering all critical aspects of resume quality.
To deliver feedback that is consistently clear, actionable, and easy for users to implement.
To foster user confidence by highlighting strengths and offering constructive paths for improvement.
To contribute to a higher success rate for job seekers in their application processes.
And what I love about this is Gemini even gave us an example of output format! Now our job is to read through and refine the instructions for the agent we are creating.
Since we used Gemini for our last example, let’s switch over to ChatGPT so we get a bit of cross-platform representation. It is worth noting that you can use ChatGPT to “prompt-start” your agent creation, and I would encourage you to try that tool. But I also want to make sure this post gives you the core fundamentals of creating a well-designed agent.
Many systems give you the option to create and provide typical prompts that might be used to engage with your agent. For a resume agent, this might be as simple as “what can be improved in the attached resume.” Or it can be more speculative, such as “give me suggestions to align this resume with a technical role.” Given how many resumes are processed by AI, I would also suggest “Please review my resume for overall effectiveness, focusing on clarity, impact, and ATS optimization” as a relevant example that may be used frequently. Conversation starters are optional, not required, and only affect your agent’s output insofar as they serve as a way to open the conversation.
If you have external resources or references that may be helpful in educating your agent, this is where you attach them. For example, with the RFP agent I mentioned earlier, I used completed RFPs as well as product documentation as knowledge resources to ensure the agent had good foundational knowledge to leverage when responding to my prompts. For this resume reviewer agent, we’ve outlined some clear instructions, but what may also be helpful are resources on what makes a good resume and on employment-market trends that may need to be considered.
Since we’re building our agent in ChatGPT, it is relevant to mention that the Pro plan currently limits knowledge resources to 20 files. Some systems also allow you to list links, which can be helpful for resources that may be subject to change. For our example, I’ve made PDFs of a couple of web guides on how to write a good resume and have uploaded those to inform our agent.
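If you’re curious what that Knowledge upload is doing conceptually, here is a simplified sketch of the manual equivalent: pull the text out of your reference PDFs and fold it into the agent’s instructions. This assumes the pypdf library and reuses the AGENT_INSTRUCTIONS string from the earlier sketch; the file names are hypothetical, and real systems do smarter chunking and retrieval than pasting everything in.

```python
from pypdf import PdfReader

def pdf_to_text(path: str) -> str:
    """Pull the raw text out of one knowledge-resource PDF."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

# Hypothetical file names standing in for the resume-writing guides.
guides = ["resume_guide_1.pdf", "resume_guide_2.pdf"]
knowledge = "\n\n".join(pdf_to_text(path) for path in guides)

# Fold the reference material into the instructions so the agent
# grounds its feedback in the uploaded guides rather than guessing.
AGENT_INSTRUCTIONS_WITH_KNOWLEDGE = (
    AGENT_INSTRUCTIONS
    + "\n\nUse the following reference material when giving feedback:\n"
    + knowledge
)
```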
There are pros and cons to different models in terms of both output (thoroughness, level of detail, depth of research) and how resource-heavy a model is to run, i.e., the compute power needed to respond to a prompt. You always want to look for the balance between the two. You won’t need a research-paper-level response (which typically requires more compute) for everything you ask, but you also don’t want such a light response that you get back less detail than is helpful. If you’re unsure what to set this as, you can always make it an option the user can adjust as needed. However, if you have strong opinions on GPT-4 versus o4-mini, this is where you can elect to make that statement. A comparison of AI models is something I plan to add to this blog soon. In the meantime, I would encourage those interested in the nuances of model comparisons to spend some time with the Vellum reports: https://www.vellum.ai/llm-leaderboard
This is where you can say whether your agent should be allowed to use web sources or whether you want a closed model restricted to the resources you provide. In ChatGPT, “Canvas” opens a collaborative workspace for writing and coding, image generation is for when you want your agent to be able to create pictures for you, and code interpreter is something you should enable if you are using spreadsheets in your knowledge resources so that your agent can read through and digest that type of tabular information.
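For reference, if you ever recreate this agent through the OpenAI Assistants API rather than the GPT builder, those checkboxes map roughly to the tools you pass when you create the assistant. A sketch, assuming the openai Python SDK’s beta Assistants endpoints and reusing the instructions from the earlier sketch; the agent name is just a placeholder.

```python
from openai import OpenAI

client = OpenAI()

# Rough API-side equivalent of the builder's capability toggles.
assistant = client.beta.assistants.create(
    name="Resume Reviewer",            # placeholder name
    instructions=AGENT_INSTRUCTIONS,   # the string we built in the earlier sketch
    model="gpt-4o",
    tools=[
        {"type": "file_search"},       # search the knowledge files you upload
        {"type": "code_interpreter"},  # read spreadsheets and other tabular data
    ],
)
print(f"Created assistant {assistant.id}")
```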
With all those areas filled out with good and thorough detail, we are now ready to create and test our agent. Yes, test!
Test, test, test, and then test some more!
This is the most important step. You need to actually confirm that what you’ve created works in the way you intend.
You’ll see your custom agent under “My GPTs” (upper right-hand corner of the screen when you click the GPTs option in the left navigation menu). Once you click into your agent, you can use one of the starter prompts you’ve provided or write a custom prompt and check the output.
With my first test, I clicked the “what would you improve in the attached resume” starter and expected ChatGPT to ask me to provide my resume. Instead, it reviewed one of the PDFs I had attached as knowledge. When I attached my resume in my next prompt, though, it understood that that resume was what I wanted it to review.
This is a great example of a decision that might come up in refining a created agent. In this case, I could either change the conversation starter or get rid of the resource. Since I feel like our agent instructions are pretty solid, I’m going to remove the resources and retest. This, of course, will differ depending on your specific scenario and is a judgment call to be weighed with consideration.
Look for the pencil icon next to your GPT to edit it, make any changes, and then update.
After removing the knowledge resources, when I tried another of the conversation starters, I got generic tips on how to build a strong resume. Though this is helpful, it is not what I want. So, my decision at this point is to go back into my agent and make another adjustment, adding to the instructions that, if a resume is referenced in the prompt but not provided, my agent should ask for a copy of the resume being evaluated.
And though I am slightly annoyed it isn’t working quite as expected yet, I am thrilled that this example so beautifully illustrates the importance of testing.
With that adjustment, I get the desired output.
And then following up with a relevant file yields helpful, specific, actionable, and professional feedback.
Though this is enough for the purposes of this post, it is not the extent of recommended testing.
Do more testing.
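If your agent ever lives behind the API instead of the GPT builder, you can even script part of that retesting so it runs every time you adjust the instructions. A minimal sketch, reusing the client and AGENT_INSTRUCTIONS from the earlier sketches; the phrases it checks for are an assumption, so match them to whatever behavior you’ve actually instructed.

```python
def ask_agent(user_message: str) -> str:
    """One-off call to the resume agent with an arbitrary prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": AGENT_INSTRUCTIONS},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

# Smoke test: with no resume attached, the agent should ask for one
# instead of handing back generic resume-writing tips. The keyword
# check below is a loose heuristic, since model wording varies.
reply = ask_agent("What would you improve in the attached resume?")
asks_for_resume = any(
    phrase in reply.lower()
    for phrase in ("share your resume", "provide your resume", "attach", "paste")
)
if asks_for_resume:
    print("PASS: the agent asked for the resume before reviewing.")
else:
    print("FAIL: unexpected reply -\n" + reply)
```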
As you use your agent, give it feedback via thumbs up and thumbs down on which outputs are working and which aren’t. You may also want to respond with instructions. For example, with the RFP agent, I had to “teach it” not to make inferences based on what would be reasonable to expect from the technical documentation I provided, and to “limit its responses to only what it could find in the documentation specifically.”
Which gets us into the conversational aspect that is so important when working with AI. Natural language is, in this way, to our advantage, and it is an important strategy to remember and use when the goal is good-quality outputs. If you don’t get the response you want or expect on your first go, try to refine what you’re asking for with a follow-up - and you might find yourself pleasantly surprised.
Now go make yourself an agent.