The Alex Formula for Prompt Engineering: A Comprehensive Guide

Dr. Ernesto Lee
5 min read · Mar 15, 2024

In an age where AI’s capabilities are being harnessed more than ever, understanding the intricacies of prompt engineering is crucial. This guide is inspired by a formula I learned from Alex, an expert in the field, which blends technicality with creativity. Let’s delve into what prompt engineering is and why it matters, then explore the Alex Formula in detail through some robust examples.

What is Prompt Engineering?

Prompt engineering is the skill of crafting inputs that direct AI models, particularly in natural language processing, to produce desired outcomes. It’s the linchpin in ensuring that AI responses are not just accurate but also contextually relevant, making it a cornerstone in fields like software testing and cybersecurity.

The Importance of Prompt Engineering

Prompt engineering is essential for:

  1. Precision in Responses: Ensures AI’s outputs are directly relevant to the query.
  2. Efficiency in AI Interactions: Reduces the time and effort spent rephrasing or clarifying queries.
  3. Customization: Tailors AI outputs to fit specific needs or contexts.

The Alex Formula: Breaking it Down

The Alex Formula is a structured approach to crafting AI prompts:

  • Task (=): Clearly define the action or response expected from the AI.
  • Context (=): Provide background information to frame the task within a relevant scenario.
  • Additional Context (=): Add more specifics to refine the AI’s response further.
  • Temperature (=): Set the AI’s creativity level, determining how varied or unexpected the responses can be.
  • Voice (=): Choose the style or persona for the AI’s response.
  • Tone (=): Decide the emotional character or attitude of the response.
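The six components above can be captured in code. Below is a minimal sketch of a hypothetical `AlexPrompt` helper (not part of any library) that holds the components and assembles the textual ones into a single prompt string; note that temperature is a model parameter passed to the API call, not text inside the prompt itself.

```python
from dataclasses import dataclass

@dataclass
class AlexPrompt:
    """Hypothetical container for the six components of the Alex Formula."""
    task: str
    context: str
    additional_context: str
    temperature: float  # sent to the model API, not rendered into the prompt
    voice: str
    tone: str

    def render(self) -> str:
        """Assemble the textual components into one prompt string."""
        return (
            f"Task: {self.task}\n"
            f"Context: {self.context}\n"
            f"Additional context: {self.additional_context}\n"
            f"Voice: {self.voice}\n"
            f"Tone: {self.tone}"
        )

prompt = AlexPrompt(
    task="Conduct a comprehensive vulnerability assessment of a new "
         "cloud-based storage solution.",
    context="The solution stores sensitive customer data and has recently "
            "integrated a third-party payment system.",
    additional_context="Focus on potential risks in data encryption and "
                       "transaction security.",
    temperature=0.7,
    voice="A cybersecurity expert with a focus on cloud storage.",
    tone="50% technical, 25% innovative, 25% factual.",
)
print(prompt.render())
```

Keeping the components as named fields makes it easy to vary one element (say, the tone) while holding the rest constant, which is useful when comparing prompt variants.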

Applying the Alex Formula: Detailed Examples

Let’s apply this formula to create powerful, detailed prompts:

Example 1: Vulnerability Assessment

  • Task (=): Conduct a comprehensive vulnerability assessment of a new cloud-based storage solution.
  • Context (=): The solution stores sensitive customer data and has recently integrated a third-party payment system.
  • Additional Context (=): Focus on potential risks in data encryption and transaction security.
  • Temperature (=): 0.7, for a mix of creative and analytical responses.
  • Voice (=): A cybersecurity expert with a focus on cloud storage.
  • Tone (=): 50% technical, 25% innovative, 25% factual.

Example 2: Penetration Testing

  • Task (=): Develop an in-depth penetration testing strategy for a corporate network with remote access capabilities.
  • Context (=): The network has recently adopted a BYOD (Bring Your Own Device) policy.
  • Additional Context (=): Emphasize testing for remote access vulnerabilities and BYOD-related security challenges.
  • Temperature (=): 0.7, balancing technical detail with strategic insight.
  • Voice (=): A seasoned penetration tester with expertise in BYOD environments.
  • Tone (=): 50% analytical, 25% exploratory, 25% factual.

Example 3: Compliance Strategy

  • Task (=): Formulate a compliance strategy that aligns with the latest GDPR and HIPAA standards.
  • Context (=): The strategy is for a healthcare app that has recently expanded its services to the EU.
  • Additional Context (=): Prioritize user data consent mechanisms and secure patient data handling.
  • Temperature (=): 0.7, for innovative yet compliant solutions.
  • Voice (=): A compliance officer with international experience.
  • Tone (=): 50% authoritative, 25% advisory, 25% factual.

Concluding Thoughts

The Alex Formula for prompt engineering provides a comprehensive framework to leverage AI effectively, especially in areas requiring high levels of precision and creativity. By meticulously crafting each component of a prompt, we can guide AI to produce not just any response, but the right response, tailored to specific needs. This approach is invaluable in fields like cybersecurity, where the difference between a good and a great AI interaction can have significant implications. As we continue to push the boundaries of AI’s capabilities, mastering prompt engineering will be key to unlocking its full potential.

Lab

Exercise: Comparing Prompt Responses for Better Test Automation Practices

The following tips and exercises are designed to guide Test Automation Engineers in crafting effective prompts to enhance automated testing. Each tip includes a practical exercise to be performed in your test automation environment. These exercises will help you compare different prompts and their effectiveness in producing desired outcomes.

Tip: Include Detailed Information. For better test automation, it’s crucial to provide detailed information in your prompts. Clearly state your goal, provide relevant context, and specify the kind of response or action you expect. Also, mention specific sources of information, such as log files or test reports, that should be used.

Exercise: Compare the results of these two prompts in your test automation environment. Observe which prompt yields more comprehensive and relevant test scripts.

Prompt 1: Write a test script for verifying the login functionality.

Prompt 2: Develop a detailed test script to verify the login functionality for a web application. The script should cover scenarios including valid credentials, invalid credentials, and empty fields. Ensure the script references the latest UI element identifiers from the source code repository and includes assert statements for all verifications.
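To make the difference concrete, here is a sketch of the kind of script Prompt 2 asks for. The web application is simulated by a hypothetical `authenticate()` function so the three scenarios (valid credentials, invalid credentials, empty fields) can run standalone; a real script would drive the UI using the element identifiers from your source code repository.

```python
# Stand-in for the application's user store (hypothetical).
VALID_USERS = {"alice": "s3cret"}

def authenticate(username: str, password: str) -> bool:
    """Hypothetical login check standing in for the real application."""
    if not username or not password:
        return False  # empty fields are rejected outright
    return VALID_USERS.get(username) == password

def test_login_valid_credentials():
    assert authenticate("alice", "s3cret")

def test_login_invalid_credentials():
    assert not authenticate("alice", "wrong-password")

def test_login_empty_fields():
    assert not authenticate("", "")

if __name__ == "__main__":
    test_login_valid_credentials()
    test_login_invalid_credentials()
    test_login_empty_fields()
    print("all login scenarios passed")
```

Notice that the three scenarios and the assert statements map directly onto the requirements spelled out in Prompt 2; Prompt 1 gives the AI no basis for producing any of them.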

Tip: Structure Your Prompts Strategically. The order of instructions in your prompts can significantly influence the results. The way you sequence your requirements could highlight certain aspects over others. Experiment with different structures to find the most effective approach.

Exercise: Use these two prompts in your test automation tool and observe the differences in the generated test cases.

Prompt 1 (instruction-context-example): Create a test case for a shopping cart feature. The cart should update quantities correctly. For example, adding two items should reflect the correct total.

Prompt 2 (context-example-instruction): The shopping cart feature should update quantities correctly, like reflecting the correct total when adding two items. Based on this, create a test case for the shopping cart feature.
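Whichever ordering you use, the generated test case should look something like the sketch below. The cart is modeled by a minimal, hypothetical `Cart` class so the quantity-and-total behavior can be asserted without a real application.

```python
class Cart:
    """Hypothetical minimal shopping cart for illustration."""

    def __init__(self):
        self.items = {}  # item name -> (unit_price, quantity)

    def add(self, name, unit_price, quantity=1):
        price, qty = self.items.get(name, (unit_price, 0))
        self.items[name] = (price, qty + quantity)

    def total(self):
        return sum(price * qty for price, qty in self.items.values())

def test_cart_updates_quantities():
    cart = Cart()
    cart.add("widget", 5.00)
    cart.add("widget", 5.00)  # adding two items...
    assert cart.items["widget"][1] == 2
    assert cart.total() == 10.00  # ...should reflect the correct total
```

When comparing the two prompt orderings, check whether the generated case includes both the quantity assertion and the total assertion, or only one of them.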

Tip: Iterate and Refine. The initial result might not always be the best. If the first attempt doesn’t meet your expectations, refine your prompt and try again to improve the outcome.

Exercise: Perform the following iterative prompts in your test automation environment. Notice how the quality of test scripts improves with each iteration.

Prompt 1: Write a script to test the search functionality in an e-commerce application.

Prompt 2: Develop a comprehensive script to test the search functionality in an e-commerce application, including tests for search result accuracy and response time.

Prompt 3: Create an extensive test script for an e-commerce application’s search functionality. It should include tests for search result accuracy, response time, and handling of no-results scenarios. The script should be compatible with cross-browser testing and use data-driven testing techniques to cover a wide range of search queries.
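A final-iteration prompt like the one above should produce something in the spirit of this sketch: a data-driven test over several queries, including a no-results case. Here `search()` is a hypothetical in-memory stand-in for the application’s search endpoint; response-time and cross-browser checks are left out since they depend on the real environment.

```python
# Hypothetical product catalog standing in for the e-commerce backend.
CATALOG = ["red shoes", "blue shoes", "green hat"]

def search(query: str) -> list[str]:
    """In-memory stand-in for the application's search endpoint."""
    return [item for item in CATALOG if query.lower() in item]

# Data-driven style: each case pairs a query with its expected result count.
CASES = [
    ("shoes", 2),
    ("hat", 1),
    ("scarf", 0),  # no-results scenario
]

def test_search_data_driven():
    for query, expected_count in CASES:
        results = search(query)
        assert len(results) == expected_count, f"query {query!r} failed"
```

Compare what each of the three prompts produces: the first typically yields a single happy-path check, while only the third reliably yields the data table and the no-results case.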
