Prompting for Performance Part II - Crafting High-Impact Prompts

Check out other posts in the series here: Part I and Part III
In our last blog post, we introduced the importance of prompting within AI tools to generate the most useful and accurate responses for financial research and diligence. In this post, we’ll explore how investment analysts can craft prompts that consistently yield reliable, insightful results. Effective prompt engineering combines domain knowledge with clarity of expression. Below, we outline several best practices to help you get the most out of an AI-powered research and diligence platform:
Be Clear and Specific
Ambiguous prompts are the enemy of clear and useful answers. In your prompt, clearly state what you’re looking for and give context for the ask. For example, instead of saying “Give me an analysis of ABC Corp”, provide specificity: “Analyze ABC Corp’s debt levels and interest coverage based on its 2024 financial statements.” Clarity and detail help LLMs focus on the relevant information, producing more accurate and useful responses.
Provide Context or Data
Frame the prompt with necessary background information. For example, if you want to assess a company’s competitive position, you might preface your question with a brief description of the company’s industry or recent events (“ABC Corp is a mid-sized semiconductor manufacturer. Based on recent market share data, what competitive advantages or threats does it face?”). Supplying context or even snippets of source text (like a paragraph from a financial report) can greatly improve the relevance of the AI’s answer. In one study from the SOA Research Institute, prompt engineers emphasized including contextual details and guidance to adapt an LLM’s thinking to problems encountered in investment research, rather than broad-stroke problem solving or logical reasoning.
Define the Output Format
If you need the information presented a certain way, say so directly in the prompt. You might ask for a bullet-point list of key findings, a SWOT analysis, or a short summary paragraph. Defining the desired format not only makes the answer more useful in your specific workflow, but it also forces the LLM to organize its response (e.g., “List three potential red flags in the target company’s financials with brief explanations”). This level of instruction acts as a set of precise guardrails that improve both the usefulness and accuracy of the answer.
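The two practices above can be combined mechanically: a specific task, its context, and an explicit output format slot together into a single prompt string. The helper below is a hypothetical sketch, not a Brightwave or vendor API; the field names are illustrative.

```python
# Illustrative sketch: assembling a specific, format-constrained prompt.
# build_prompt and its parameters are hypothetical examples, not any
# real platform's API -- they show how task, source context, and format
# instructions combine into one clear request.

def build_prompt(company: str, topic: str, source: str, output_format: str) -> str:
    """Combine the task, its grounding source, and format instructions."""
    return (
        f"Analyze {company}'s {topic} based on {source}. "
        f"Present the answer as {output_format}."
    )

prompt = build_prompt(
    company="ABC Corp",
    topic="debt levels and interest coverage",
    source="its 2024 financial statements",
    output_format="a bullet-point list of three key findings, "
                  "each with a brief explanation",
)
print(prompt)
```

Even when prompts are written by hand rather than assembled in code, thinking in these three slots (task, source, format) is a reliable way to avoid ambiguity.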
Use Step-by-Step Reasoning for Complex Queries
For complicated analyses, it can help to prompt the AI to reason through the problem step by step. For instance, a strategic question around cash flows might be broken down as follows: “Analyze the company’s cash flow trends. First, list the year-over-year changes in free cash flow for the last 5 years, then evaluate whether the trend is sustainable and why.” Guiding the model through a pre-defined sequence based on the user’s domain expertise can dramatically improve accuracy. Breaking down the prompt (or explicitly requesting chain-of-thought reasoning, the technical term for this process) encourages the model to be thorough and systematic.
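The decomposition above can be made explicit by numbering the steps for the model. The sketch below is a hypothetical illustration of that pattern; the helper name and step wording are examples drawn from the cash-flow question, not a prescribed template.

```python
# Illustrative sketch: turning one broad question into an ordered
# sequence of steps, a simple chain-of-thought-style prompt.
# stepwise_prompt is a hypothetical helper, not a real API.

def stepwise_prompt(task: str, steps: list[str]) -> str:
    """Append numbered steps the model should work through in order."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, start=1))
    return f"{task} Work through the following steps in order:\n{numbered}"

prompt = stepwise_prompt(
    task="Analyze the company's cash flow trends.",
    steps=[
        "List the year-over-year changes in free cash flow for the last 5 years.",
        "Evaluate whether the trend is sustainable and explain why.",
    ],
)
print(prompt)
```

Numbering the steps matters: it gives the model an explicit checklist, so no part of the analysis is silently skipped.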
Iterate and Refine
Treat interacting with an AI as an iterative dialogue. If the first answer is too general or misses the mark, follow up with a refined prompt or a clarifying question. Directness is extremely helpful. For example, a follow-up to a vague answer might be: “That overview was too broad. Can you focus specifically on ABC Corp’s customer concentration risk based on its client list?” Each iteration can build on the last, zeroing in on the details that matter. This iterative approach mirrors how a manager might press a junior analyst for more detail. Maintaining a conversation with the AI tool – asking follow-ups, requesting examples, or rephrasing the question – is a powerful way to drill down into complex due diligence topics.
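Under the hood, this kind of dialogue is typically represented as a running message history, in the role-tagged chat format common to LLM APIs. The snippet below is a generic sketch of that structure (the roles are conventional, not tied to any specific vendor), using the follow-up from the example above.

```python
# Illustrative sketch: iterative refinement as a running message
# history. The "user"/"assistant" roles follow the chat-message
# convention shared by most LLM APIs; the assistant content here is
# a placeholder, not a real model response.

conversation = [
    {"role": "user", "content": "Give me an overview of ABC Corp's risks."},
    {"role": "assistant", "content": "(broad risk overview returned here)"},
    {
        "role": "user",
        "content": (
            "That overview was too broad. Focus specifically on ABC Corp's "
            "customer concentration risk based on its client list."
        ),
    },
]
```

Because each turn is appended to the same history, the model sees the earlier exchange when answering the refinement, which is what lets follow-ups build on prior answers rather than starting over.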
Ask for Assumptions and Sources (When Possible)
For critical or more complex research, better prompting may mean asking the AI to reveal its assumptions or cite data points. For example, a prompt such as “Explain your reasoning and mention which facts from the report support your conclusion” helps verify information and also brings transparency to the AI’s thought process. A purpose-built tool for finance like Brightwave includes sentence-level attribution as a native feature of the platform, making source verification easy and accurate.
By applying these techniques, even professionals with limited exposure to AI tools can generate high-quality research. Prompt engineering isn’t about tricking the model; it’s about communicating analytical intent clearly and unambiguously. Like many aspects of traditional financial research or modeling, crafting the text inputs that guide the behavior of LLMs is both an art and a science – a new literacy for the modern analyst. With practice, writing effective prompts becomes second nature and can dramatically reduce the manual workload of research and diligence workflows.