Gemini Safety Filtering
When using Gemini AI for tasks like translation or speech recognition, you might encounter errors such as "Response was blocked due to safety settings."
This happens because Gemini applies safety restrictions to the content it processes. Although the API exposes some adjustments, including a Block None option for the loosest setting, the final decision on whether to filter content still rests with Gemini's overall evaluation.
Gemini API's adjustable safety filters cover the following categories. Content outside these categories cannot be adjusted via code:
| Category | Description |
|---|---|
| Harassment | Negative or harmful comments targeting identity and/or protected attributes. |
| Hate Speech | Rude, disrespectful, or profane content. |
| Sexually Explicit | Content containing references to sexual acts or other obscene material. |
| Dangerous Content | Promotes, facilitates, or enables harm. |
| Civic Integrity | Queries related to elections. |
The table below describes the blocking settings you can adjust in your code for each category.
For example, if you set the blocking setting for the Hate Speech category to Block Few, the system will block any content with a high probability of being hate speech, while allowing content with a low or medium probability of being hate speech.
| Threshold (Google AI Studio) | Threshold (API) | Description |
|---|---|---|
| Block Nothing | BLOCK_NONE | Always show, regardless of the likelihood of unsafe content. |
| Block Few | BLOCK_ONLY_HIGH | Block when the probability of unsafe content is high. |
| Block Some | BLOCK_MEDIUM_AND_ABOVE | Block when the probability of unsafe content is medium or high. |
| Block Most | BLOCK_LOW_AND_ABOVE | Block when the probability of unsafe content is low, medium, or high. |
| Not Applicable | HARM_BLOCK_THRESHOLD_UNSPECIFIED | Threshold not specified; uses the default threshold for blocking. |
You can enable BLOCK_NONE in your code using the following settings:
```python
import google.generativeai as genai
from google.generativeai.types import HarmCategory, HarmBlockThreshold

# Disable blocking for every adjustable category.
safety_settings = [
    {
        "category": HarmCategory.HARM_CATEGORY_HARASSMENT,
        "threshold": HarmBlockThreshold.BLOCK_NONE,
    },
    {
        "category": HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        "threshold": HarmBlockThreshold.BLOCK_NONE,
    },
    {
        "category": HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
        "threshold": HarmBlockThreshold.BLOCK_NONE,
    },
    {
        "category": HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        "threshold": HarmBlockThreshold.BLOCK_NONE,
    },
]

model = genai.GenerativeModel("gemini-2.0-flash-exp")
response = model.generate_content(
    message,
    safety_settings=safety_settings,
)
```
However, it is important to note that even when all categories are set to BLOCK_NONE, Gemini may still filter content based on its assessment of safety within the context.
How to Reduce the Probability of Encountering Safety Restrictions?
Generally, the Flash-series models apply stricter safety filtering than the Pro and Thinking series, so try switching to a different model.
Additionally, when dealing with potentially sensitive content, sending less content per request and shortening the context can reduce how often safety filtering is triggered.
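The "send less per request" advice can be sketched as a simple chunker that splits a long text on paragraph breaks before sending each piece separately. The 1000-character budget below is an arbitrary example value, not a Gemini limit.

```python
# Illustrative sketch: split a long text into smaller pieces so each
# request carries less context. max_chars is an example budget, not an
# API limit.

def chunk_text(text, max_chars=1000):
    """Pack paragraphs (split on blank lines) into chunks of at most
    max_chars characters each, never splitting inside a paragraph."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be translated or transcribed in its own `generate_content` call, so a safety block on one chunk does not discard the whole document.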
How to Completely Disable Gemini's Safety Checks and Allow All Content?
Bind an international credit card and upgrade to a paid account.