Hack University Hacker Guide

Judging Criteria

Every project is reviewed across the same six categories. Teams should still frame their story around one primary path, but the scoring model stays consistent so judges can compare work clearly.

How Projects Are Scored

Judges score each submission from 1 to 5 in every category. The final score is a weighted average, and because all six categories carry equal weight, weak packaging or a weak presentation can hurt even strong technical work.

Technical Difficulty

Measures the complexity of the problem and the sophistication of the implementation.

  • 1: Very basic implementation with minimal challenge.
  • 3: Moderate technical complexity with some harder elements.
  • 5: Sophisticated build demonstrating advanced knowledge.

Functionality

Measures reliability, completeness, and whether the project actually works as demonstrated.

  • 1: Barely functional with major bugs or missing pieces.
  • 3: Mostly functional with minor issues.
  • 5: Reliable, polished, and robust in the demo.

Innovation and Creativity

Measures originality, novelty, and how much the solution stands out from standard approaches.

  • 1: Mostly derivative.
  • 3: Some fresh thinking but still familiar.
  • 5: Clearly original and creatively executed.

Design and User Experience

Measures usability, accessibility, clarity, and overall product or presentation polish.

  • 1: Confusing or hard to use.
  • 3: Clear enough with a usable flow.
  • 5: Intuitive, accessible, and polished.

Impact and Usefulness

Measures practical value, significance of the problem, and real-world relevance.

  • 1: Limited usefulness or trivial scope.
  • 3: Useful for a clear audience.
  • 5: Strong potential impact on a meaningful problem.

Presentation and Demo

Measures how clearly the team explains the project and how effective the live demo is. Live demos are expected for both virtual and in-person presentations.

  • 1: Unclear presentation, weak demo flow, or no completed live demo. Teams that do not complete the live demo may lose all points in this category.
  • 3: Clear explanation with adequate live demo coverage.
  • 5: Compelling presentation with sharp communication and a strong live demo.

Weighting

Each category is effectively weighted the same at 16.7%. That means teams should avoid over-indexing on only one strength.

  • Technical Difficulty (16.7%): Advanced implementation depth and complexity. Rewards challenging engineering, system design, or technical method quality.
  • Functionality (16.7%): How well the project works in reality. Rewards working features, stability, and completeness.
  • Innovation and Creativity (16.7%): Originality of the idea or execution. Rewards novel angles, standout features, and fresh solutions.
  • Design and User Experience (16.7%): Ease of use and presentation polish. Rewards intuitive flow, visual clarity, and accessibility.
  • Impact and Usefulness (16.7%): Value of the problem and solution. Rewards meaningful relevance and clear benefit to users or decision-makers.
  • Presentation and Demo (16.7%): Communication quality. Rewards a clear project story, sharp live walkthrough, and confident delivery. Backup videos are only for technical difficulties and carry zero points for this category.
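The scoring model described above can be sketched in a few lines of code. This is an illustrative approximation, not the organizers' actual tooling: six categories scored 1 to 5 by each judge, averaged per category across judges, then combined with equal weights of roughly 16.7% each. The category names match the rubric; the sample scores are made up.

```python
# Illustrative sketch of the equal-weight scoring model (not official tooling).

CATEGORIES = [
    "Technical Difficulty",
    "Functionality",
    "Innovation and Creativity",
    "Design and User Experience",
    "Impact and Usefulness",
    "Presentation and Demo",
]

def final_score(judge_scores):
    """judge_scores: list of dicts, each mapping category -> a 1..5 score."""
    weight = 1 / len(CATEGORIES)  # equal weighting, ~16.7% per category
    # Average each category across all judges first.
    per_category = {
        c: sum(scores[c] for scores in judge_scores) / len(judge_scores)
        for c in CATEGORIES
    }
    # Then combine the category averages with equal weights.
    return sum(weight * avg for avg in per_category.values())

# Example with two hypothetical judges:
judges = [
    {c: 4 for c in CATEGORIES},
    {c: 3 for c in CATEGORIES},
]
print(round(final_score(judges), 2))  # 3.5
```

Because the weights are equal, a single weak category pulls the final score down by the same amount regardless of which category it is, which is the point made above about not over-indexing on one strength.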

What Each Competition Path Should Emphasize

Technical Competition

  • Working implementation and clear architecture matter most.
  • Judges want proof the product actually runs.
  • Strong repos, setup instructions, and feature demos help immediately.

Start-Up Competition

  • Judges want a clear customer problem and market logic.
  • The pitch deck and prototype story matter as much as the build.
  • Customer value, traction logic, and business framing help most.

Analytics Competition

  • Useful insight and sound method matter most.
  • Explain the data source, workflow, and decision value clearly.
  • Clean charts, dashboards, and interpretable models score better.

Judging Process

The review process is structured so that public materials and recorded media are sufficient for judges to evaluate the project, even if they revisit it later.

1. Initial Screening

Judges verify that the submission meets baseline requirements, follows the rules, and includes public repo access, videos, slides, README, citations, and competition type.

2. Project Review

Reviewers inspect the GitHub repository, slide deck, and linked videos to understand what the team built and how it fits the chosen path.

3. Evaluation Scoring

Each judge scores the project from 1 to 5 across all six criteria, then those scores are averaged and weighted.

4. Deliberation

Judges compare outcomes, discuss edge cases, resolve ties, and review standout projects more closely.

5. Winner Selection

Final rankings are determined after deliberation. Judges may reference the 5-minute backup demo only if technical difficulties affect the live presentation, but the backup video carries zero points for the Presentation and Demo category.