The GW School of Business (GWSB) invites faculty and staff to submit proposals for the GWSB AI in Action Award, which recognizes uses of artificial intelligence (AI) that advance teaching, research, and operations at the School. In line with GWSB’s commitment to applied and responsible AI, this award is intended to catalyze practical experimentation and knowledge sharing across the GWSB community.
Submissions may include completed or planned projects, such as tools, workflows, course innovations, research papers or reports, prototypes, or other approaches that demonstrate how AI can improve learning, decision-making, or institutional effectiveness. Proposals will be evaluated on six criteria: Novelty & Distinctiveness, Impact & Reach, Reusability & Transferability, Education & Knowledge Sharing, Ethical & Responsible Use, and Effectiveness & Execution.
Awards will include monetary prizes and School-wide recognition, with the goal of amplifying AI work happening across the School and encouraging broader engagement with AI.
Applicants should submit a short document (approximately 2–3 pages) that briefly addresses each of the evaluation criteria—Novelty & Distinctiveness, Impact & Reach, Reusability & Transferability, Education & Knowledge Sharing, Ethical & Responsible Use, and Effectiveness & Execution—at a level appropriate to the project’s stage of development. Submissions may describe completed work from the past 18 months or well-defined planned projects and should focus on clarity, intent, and potential value to the GWSB community.
Applicants are welcome to include supplementary materials as appendices or links, such as implementation plans, prototypes, code, datasets, papers, reports, sample assignments, or other relevant artifacts. Submissions with clear plans for education or dissemination (e.g., one or more short seminars) will score more favorably.
Submissions are due by March 15, 2026. Awards will be announced beginning April 10, 2026. Submit proposals by email to jphall@gwu.edu with the subject line: AI Award Proposal.
Proposals should adhere to GW data and AI policies. Submissions will be evaluated against a structured rubric and scored using GW-approved AI tools.
All submissions must meet institutional policies related to AI use, data privacy, and data security to be considered for an award. The most relevant policies to review prior to submitting a proposal include AI best practices, data protection guidance, and AI privacy guidance.
Submissions will be scored from 1–3 points on each of the six competition dimensions using the following rubric, for a maximum possible score of 18 points.
| Dimension | Description | 1 Point | 2 Points | 3 Points |
|---|---|---|---|---|
| Novelty & Distinctiveness | How original is the submission—idea, approach, or contribution—relative to existing AI or IT-related work within the school? | Largely incremental or similar to existing work already present at the school | Moderately novel adaptation, synthesis, or extension of existing ideas or tools | Clearly distinct, original, or first-of-its-kind contribution within the school |
| Impact & Reach | To what extent does the submission (or work) positively affect students, faculty, staff, or school operations? | Benefits a narrow group or limited use case | Benefits multiple users, courses, units, or workflows | Broad or potentially systemic impact across programs, departments, or functions |
| Reusability & Transferability | How easily can others reuse, adapt, incorporate, or build upon the submission or its outputs? | Difficult to reuse or transfer without substantial effort | Reusable with moderate documentation, context, or support | Easy to reuse, adapt, or extend |
| Education & Knowledge Sharing | How effectively do the proposers plan to document, explain, or share the submission or its insights with others? | Limited or informal plans for sharing or dissemination | Clear but modest plans for documentation, presentation, or dissemination; willingness to host a short seminar | Strong, intentional plans for teaching, documentation, or broad knowledge sharing; willingness to host a short seminar |
| Ethical & Responsible Use | Does the submission’s use of AI result in explainable processes or positive outcomes across all intended user groups? | No consideration for AI transparency or fairness across user groups | Consideration of increased AI transparency or fairness across user groups | Actionable or tangible enhancements that increase AI transparency or fairness across user groups, potentially including formal evaluation results |
| Effectiveness & Execution | How well is the submission articulated, demonstrated, or supported by evidence, given its stage of development? | Early-stage or conceptual, with limited evidence or clarity | Clearly developed, piloted, or argued, with some supporting evidence | Well-executed or well-developed plans, clearly communicated, and supported by strong evidence or demonstration |