The Scientist's Guide to Prompting

Most people treat AI like a search engine. This is a mistake.

To get real value out of these tools, you need to understand that prompting spans a hierarchy. It ranges from simple one-off questions to massive, multi-page workflows that simulate an expert employee. Below, we break down the levels of prompting, culminating in the "Mega-Prompts" used in our lab for manuscript review and grant writing.

1. The Hierarchy of Complexity

Level 1: The Chat
"The Search Replacement"

Goal: Quick info retrieval.
Example: "What is a Hidden Markov Model?"
Value: Low. It saves you a Google search, but adds no intellectual depth.

Level 2: The Task
"The Intern"

Goal: Execute a specific job.
Example: "Edit this paragraph to be punchier. Do not change the meaning."
Value: Medium. Saves time on tedious tasks like writing code snippets or editing abstract text.

Level 3: The Workflow
"The Mega-Prompt"

Goal: Simulate a senior colleague.
Example: Uploading a full manuscript with a 5-page rubric and asking for a section-by-section critique based on specific journal guidelines.
Value: High. This is where AI transforms research productivity.

2. The "Mega-Prompt" Library

This is a repository of "Algorithmic Workflows"—complex, multi-stage instructions designed to force AI models to simulate expert roles.

How to use: Click the link to download the prompt text, then paste the entire block into a large-context model chat (the flagship models behind ChatGPT, Claude, or Gemini). For best results, follow each prompt's instructions carefully; most will also require you to upload documents to the chat. If the model lets you choose between a "thinking" and a "fast" mode, always choose "thinking," and use a paid tier if possible (paid tiers typically give you access to more compute per request). TAMU provides access to some of these tools.
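
If you would rather run a workflow as a script than paste it into a chat window, the same pattern works over an API. The sketch below uses the OpenAI Python SDK; the model name and file names are illustrative assumptions, not part of our library, and you will need your own API key.

    # Minimal sketch: send a downloaded mega-prompt plus a manuscript to a
    # large-context model. Assumes the `openai` package is installed and
    # OPENAI_API_KEY is set; file and model names are placeholders.
    from pathlib import Path

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    mega_prompt = Path("manuscript_audit_prompt.txt").read_text()  # downloaded prompt
    manuscript = Path("my_manuscript.txt").read_text()             # document to review

    response = client.chat.completions.create(
        model="gpt-4o",  # substitute whichever large-context model you have access to
        messages=[{
            "role": "user",
            "content": mega_prompt + "\n\n--- MANUSCRIPT ---\n" + manuscript,
        }],
    )

    print(response.choices[0].message.content)

The Anthropic and Gemini SDKs follow the same shape; only the client and call names change.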

Research & Writing Workflows (The "Scientific Method" Engines)

  • The Manuscript Audit: Acts as an expert editor, reviewing your paper and flagging exactly what needs to change. Think of it as a kind peer reviewer.
  • The "Response to Reviewers" Diplomat: Takes your raw (often angry) draft responses alongside Reviewer 2's critique, then outputs a response table built on "The Sandwich Method" (Agree, Refute with Evidence, Fix).
  • The "Lay Summary" Translator: Converts dense genomic abstracts into press releases written at an 8th-grade reading level, specifically flagging jargon like "heterozygosity."
  • The "DDIG/Small Grant" Architect: Builds the narrative arc for a Dissertation Improvement Grant, ensuring the budget justification aligns perfectly with experiments.
  • The "Literature Gap" Analyzer: Takes 5-10 abstracts as input and identifies the specific "knowledge conflict" that allows you to frame your research novelty.

Code & Data Science (The "Reproducibility" Engines)

Teaching & Mentoring (The "Pedagogy" Engines)

Graduate Student Survival (The "Stress Test" Engines)

Career & Administration (The "Bureaucracy" Engines)

Lab Management (The "Safety" Engines)