Here is what Sergey Brin’s remarks, and the broader context around them, tell us about interacting with AI for better results:
1. Brin’s "Threaten AI" Joke — Is There Any Truth?
- Sergey Brin joked that AI models respond better if you “threaten them with physical violence,” a tongue-in-cheek statement highlighting that more forceful, commanding prompts can sometimes yield sharper or more direct answers.
- This contrasts with the common habit of adding “please” and “thank you,” which is more a human social convention than a necessity when dealing with machines.
- Brin’s quip pokes fun at the fact that AI isn’t a human being; it has no feelings, so politeness doesn’t inherently improve results.
2. Why Might "Threatening" or Stronger Commands Work Better?
- AI models are trained on vast amounts of text, including both polite and demanding language.
- Clearer, more direct, or urgent prompts can steer the model toward prioritizing certain kinds of responses.
- A strong or urgent tone might make the model “focus” and answer more precisely or completely, as if it were treating the prompt as a higher-priority task.
3. The Politeness Debate
- Some research suggests that polite prompts can improve responses, possibly because of the conversational data models are trained on.
- However, studies such as the 2024 “Should We Respect LLMs?” paper report mixed results: politeness sometimes helps, sometimes makes no difference, and sometimes slightly hurts performance.
- OpenAI’s Sam Altman has mocked polite prompting as a “waste” of computing resources, implying users should be more straightforward.
4. The Shift in Prompt Engineering
- Early on, carefully crafting prompts was crucial to getting good AI responses, and a whole field of “prompt engineering” emerged.
- Now, many users simply ask an AI to generate or optimize prompts for them (see the sketch right after this list).
- AI-powered prompt-tuning tools and steadily improving models make manual prompt engineering less critical, leading some to declare it “dead” or obsolete.
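One concrete way to do this is “meta-prompting”: handing a rough prompt to a model and asking for a sharper version before you use it. The sketch below is a minimal illustration using the OpenAI Python SDK; the model name, the system instruction, and the example prompt are assumptions made for illustration, not a tool Brin or Google endorse.

```python
# Minimal meta-prompting sketch: ask a model to rewrite a vague prompt into a
# clearer, more specific one. Assumes the OpenAI Python SDK (openai>=1.0) is
# installed and OPENAI_API_KEY is set; "gpt-4o-mini" is just an example model.
from openai import OpenAI

client = OpenAI()

rough_prompt = "write something about electric cars"

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "You rewrite vague prompts into clear, specific, direct prompts. "
                "Return only the improved prompt."
            ),
        },
        {"role": "user", "content": rough_prompt},
    ],
)

improved_prompt = response.choices[0].message.content
print(improved_prompt)
```

The improved prompt can then be sent as an ordinary request; the point is that the model itself does the prompt engineering.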
5. Brin’s Comeback & AI’s Importance
- Brin’s return to Google, driven by his excitement about AI, shows how seriously tech leaders are taking the rapidly evolving field.
- His work on projects like Google’s Gemini models means deep, hands-on involvement in making AI smarter, more responsive, and more useful.
Takeaway: How to Get Better AI Results?
- Being clear and direct in your prompts is generally better than overly polite or vague phrasing.
- Experiment with tone and command strength; sometimes firm instructions help (a small experiment is sketched after this list).
- Don’t stress over politeness: AI doesn’t have feelings, though natural conversational phrasing can sometimes help with context.
- Use AI itself to help you craft better prompts.
- Keep an eye on research and new tools that optimize prompting.
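If you want to test the tone question on your own tasks, an informal side-by-side comparison is enough. The sketch below sends the same request with polite, neutral, and firm phrasing and prints the three answers for manual comparison; it assumes the OpenAI Python SDK, and the model name and phrasings are illustrative placeholders, not findings from the studies mentioned above.

```python
# Informal tone experiment: the same task phrased politely, neutrally, and
# firmly. Assumes the OpenAI Python SDK and OPENAI_API_KEY; "gpt-4o-mini" and
# the example task are placeholders chosen for illustration.
from openai import OpenAI

client = OpenAI()

variants = {
    "polite": (
        "Could you please summarize the main trade-offs of electric cars "
        "in three bullet points? Thank you!"
    ),
    "neutral": "Summarize the main trade-offs of electric cars in three bullet points.",
    "firm": (
        "Summarize the main trade-offs of electric cars in three bullet points. "
        "Be precise and do not pad the answer."
    ),
}

for tone, prompt in variants.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {tone} ---")
    print(response.choices[0].message.content)
    print()
```

Results will vary by model and task, which is consistent with the mixed findings above; the useful part is that the comparison is cheap to run yourself.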