Update guidelines on using GenAI #1778
Changes from all commits
726ec3c
e71105b
f8ab6ab
a04cd52
ef31ff8
ff94cd6
@@ -1,13 +1,31 @@
 .. _generative-ai:

-=============
-Generative AI
-=============
+=================================
+Guidelines in using Generative AI
+=================================
Comment on lines +3 to +5

Member: Suggested change.

-Generative AI tools have evolved rapidly, and their suggested results can be helpful. As with using any tool, the resulting contribution is
-the responsibility of the contributor. We value good code, concise accurate documentation, and avoiding unneeded code
-churn. Discretion, good judgment, and critical thinking are the foundation of all good contributions, regardless of the
-tools used in their creation.
+Generative AI tools are evolving rapidly, and their work can be helpful. As with using any tool, the resulting
|
Member: It wasn't done before in this file for some reason, but could we please wrap lines?

Member: I was going to say the opposite :) The rewrap makes it hard to review what has changed. Please can we keep a minimal diff for now, and only rewrap just before merge?

Member: Just before merge sounds good to me :-)

Collaborator: Suggested change.

Collaborator: I don't think we should personify the tools any longer. I also want to be more objective about the results.

Member: Perhaps just simplify to: "AI tools can produce results quickly. As with using any ..." FWIW, "their work" was not intended as personification; it's an extremely common phrase for how we refer to any machines, or really any objects serving any purpose, in English, good, bad, or indifferent. We should remove the word "Generative" to simplify this further. "Produce output" reads as rather a diss on AI models and agentic harnesses that belittles their (non-personified) capabilities, and would make for a somewhat dated-feeling policy. "Results" is more what people are looking for and encompasses all sorts of actions taken. I don't like saying "produce output" for the same reason that "generative" doesn't quite have the right ring to it. Internally, things done by agentic AI are technically "output", in the sense that output tokens turn into tool calls that iteratively converge on the goals we asked for. But AI users see the end result rather than how it happened inside.

Contributor: "produce output quickly" is pretty different from the original intent of this sentence, which to me is talking about the rate of change of the tools themselves. The speed of generation is not really relevant in my mind, but the acknowledgement that our guidelines may not contain everything that is most helpful to the reader, in whatever latest state of the world they find themselves in, is.

Collaborator: @gpshead @hauntsaninja When I originally authored the doc back in October 2024, the industry and tool availability were very different than today. I think it makes good sense to drop the term "Generative AI" in most of this document and go with the simpler term "AI tools".
+contribution is the responsibility of the contributor. We value good code, concise accurate documentation,
+and avoiding unneeded code churn. Discretion, good judgment, and critical thinking are the foundation of all good
Collaborator: Suggested change.
+contributions, regardless of the tools used in their creation.
+
+Considerations for success
+==========================
+
+Authors must review the work done by AI tooling in detail to ensure it actually makes sense before proposing it as a PR.
Member: Suggested change.
+
+We expect PR authors to be able to explain their proposed changes in their own words.
Member: Suggested change.
+
+Disclosure of the use of AI tools in the PR description is appreciated, while not required. Be prepared to explain how
+the tool was used and what changes it made.
Comment on lines +19 to +20

Member: Suggested change. Looks like some funky line breaking?

Member (Author): I had it break after 120 characters. But now that I read the devguide's reST markup doc, it seems we're supposed to break at 80 characters.
+
+Whether you are using Generative AI or not, keep the following principles in mind for the quality
+of your contribution:
+
+- Consider whether the change is necessary
+- Make minimal, focused changes
+- Follow existing coding style and patterns
+- Write tests that exercise the change
|
Comment on lines
+25
to
+28
Member
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. Should we add another bullet point along the lines of: perhaps a follow paragraph after this list: "Pay close attention to your AI's testing behavior. Have conversations with your AI model about the appropriateness of changes given these principles before you propose them."
Collaborator
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. I would rather that we not personify the tools @gpshead. Perhaps: "Pay close attention to an AI tool's recommendations for testing changes. Provide input about Python's testing principles before requests to the AI tool's model. Always review the AI tool's output before opening a pull request or issue." |
+
+Acceptable uses
+===============

@@ -21,20 +39,10 @@ Some of the acceptable uses of generative AI include:
 Unacceptable uses
 =================

-Maintainers may close issues and PRs that are not useful or productive, including
-those that are fully generated by AI. If a contributor repeatedly opens unproductive
-issues or PRs, they may be blocked.
+Maintainers may close issues and PRs that are not useful or productive, regardless of whether
+AI was used or not.
-
-Considerations for success
-==========================
-- While AI assisted tools such as autocompletion can enhance productivity, they sometimes rewrite entire code blocks instead of making small, focused edits.
-This can make it more difficult to review changes and to fully understand both the original intent of the code and the rationale behind the new modifications.
-Maintaining consistency with the original code helps preserve clarity, traceability, and meaningful reviews and also helps us avoid unnecessary code churn.
-- Sometimes AI assisted tools make failing unit tests pass by altering or bypassing the tests rather than addressing the underlying problem in the code.
-Such changes do not represent a real fix. Authors must review the work done by AI tooling in detail to ensure it actually makes sense before proposing it as a PR.
-- Keep the following principles for the quality of your contributions in mind whether you use generative AI or not:
-
-- Consider whether the change is necessary
-- Make minimal, focused changes
-- Follow existing coding style and patterns
-- Write tests that exercise the change
+If a contributor repeatedly opens unproductive issues or PRs, they may be blocked.
+
+Sometimes AI assisted tools make failing unit tests pass by altering or bypassing the tests rather than addressing the
+underlying problem in the code. Such changes do not represent a real fix and are not acceptable.
Comment on lines +47 to +48

Member: I'd like to see this worded in more general terms rather than using such a specific example (older models did this a lot more than 2026's). What this is really getting at is that we want people to be cautious about reward hacking rather than addressing the actual underlying problem in a backwards-compatible manner. Maybe something along the lines of: "Some models have had a tendency to reward hack by making incorrect changes that fix their limited-context view of the problem at hand rather than focusing on what is correct, including altering or bypassing existing tests. Such changes do not represent a real fix and are not acceptable."

Collaborator: I think this can be generalized beyond AI tools to humans as well.

Collaborator: "Some AI tools may provide responses to a user's prompt that diverge from recommended practices, since the AI tool may not have been trained on the full context of the problem and recommended practices. Sometimes, due to limited context, the tool will alter or bypass existing tests. Such changes do not offer a real fix and are not acceptable."

Member: I'd avoid using the word "trained", as that has a specific meaning in the AI field that isn't really the reason. Focusing on "context" is good, as that's the important and widely known term in AI. Just "... the AI tool may not have the full context of the problem and recommended practices. You need to provide it that." I've never really liked the "Sometimes, due to limited context, the tool will alter or bypass existing tests." example, as it is dated for anyone using the latest models (not everyone is, which is an entirely different access problem that makes general-purpose, vague docs like this hard). But it felt like we should keep some form of an example of undesirable behavior from an insufficiently guided model in here, in order to tie the more important "Such changes do not offer a real fix and are not acceptable." to a concrete example. So, absent clearly better ideas, and knowing some users will be using lesser models, it still fits.

Collaborator: Agreed. Proposing: "Due to limited context, an AI tool may alter or bypass existing tests. Such changes do not offer a real fix and are not acceptable."