AI Usage Policy

DataTalks.Club zoomcamps allow AI tools (ChatGPT, Claude, Gemini, Cursor, GitHub Copilot, and similar) for homework, projects, and peer reviews. This page explains what is expected and what is counterproductive.

The policy

Use AI freely, but understand what you submit.

The expectation is not that you avoid AI - it is that you stay engaged with the work. If an AI generates code, read the code. If an AI suggests an architecture, understand why it fits. If you submit something you cannot explain, you have wasted the learning opportunity.

In homework

AI is allowed for homework. Most homework asks for a numerical answer or a small piece of code. There is no enforcement: we do not check how you arrived at your answers.

If you do use AI:

  • Make sure you understand the answer. Homework is your check that you absorbed the module.
  • Try the question yourself first. AI is most useful for getting unstuck, not for skipping the work.

In the project

AI is allowed for the project too. Many participants use Cursor, Claude, or similar coding assistants to speed up development.

What we ask:

  • Understand every decision. If a peer reviewer asks why you chose a particular orchestrator or schema, you should be able to explain it.
  • Document your AI usage in the README if AI shaped major decisions. This is good practice and helps reviewers.
  • The project should reflect your understanding, not just AI output. A project where everything was clearly generated without engagement is unlikely to teach you anything and will not strengthen your portfolio.

We cannot enforce this. Some people will still ask AI to implement everything end to end. The “vibe coding” approach is technically possible. The cost is yours - you do not get the learning benefit, and the result is a portfolio piece you cannot defend in an interview.

In peer review

AI is allowed for drafting reviews. But if you let an LLM write the whole review without actually looking at the project, you lose the main reason peer review exists - reading other people's code is one of the best learning opportunities in the course.

If you suspect a project you are reviewing was AI-generated end to end, you are not obligated to reproduce everything to verify it. Score it against the rubric (does the README cover the criteria, is the data warehouse used correctly, and so on) and move on. You do not have to grade harder because you suspect AI use - just review what is in front of you.

Disclosing AI usage

You are not required to disclose AI usage. There is no special form, no honor code to sign. We trust participants to use AI in a way that helps them learn.

If your project README mentions “I used Cursor and Claude during development”, that is helpful for reviewers but not required.
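
A short note is enough. One hypothetical way to phrase it (the tools and details here are illustrative, not required wording):

  AI usage: I used Cursor with Claude for scaffolding (Dockerfile, dbt model skeletons) and for debugging. I reviewed all generated code; the schema design and choice of orchestrator are my own.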

What good AI usage looks like

  • Asking AI to explain a concept you found confusing in the lecture.
  • Using a coding assistant to write boilerplate (Dockerfiles, Terraform stubs, dbt model skeletons).
  • Asking AI to debug an error message before posting in Slack.
  • Using AI to draft your README and then editing it.
  • Asking AI to suggest improvements to your project, then evaluating which suggestions to apply.

What counterproductive AI usage looks like

  • Asking AI to do the project end to end, then submitting without reading the code.
  • Asking AI to write your peer review without looking at the project.
  • Asking AI to write your learning-in-public post about a module you have not actually engaged with - the post will be generic and will not earn you the engagement those posts are meant to build.