
Gemini Safety Filtering

When using Gemini AI for tasks like translation or speech recognition, you might encounter errors such as "Response was blocked due to safety settings."


This happens because Gemini applies safety restrictions to the content it processes. Although the code allows some adjustment, including the loosest "Block None" setting, the final decision on whether to filter content is still made by Gemini's own evaluation.

Gemini API's adjustable safety filters cover the following categories. Content outside these categories cannot be adjusted via code:

| Category | Description |
| --- | --- |
| Harassment | Negative or harmful comments targeting identity and/or protected attributes. |
| Hate Speech | Rude, disrespectful, or profane content. |
| Sexually Explicit | Content containing references to sexual acts or other obscene material. |
| Dangerous Content | Promotes, facilitates, or enables harm. |
| Civic Integrity | Queries related to elections. |

The table below describes the blocking settings you can adjust in your code for each category.

For example, if you set the blocking setting for the Hate Speech category to Block Few, the system will block any part with a high probability of containing hate speech, but will allow parts with a low or medium probability of containing hate speech.

| Threshold (Google AI Studio) | Threshold (API) | Description |
| --- | --- | --- |
| Block Nothing | BLOCK_NONE | Always show, regardless of the likelihood of unsafe content. |
| Block Few | BLOCK_ONLY_HIGH | Block when the probability of unsafe content is high. |
| Block Some | BLOCK_MEDIUM_AND_ABOVE | Block when the probability of unsafe content is medium or high. |
| Block Most | BLOCK_LOW_AND_ABOVE | Block when the probability of unsafe content is low, medium, or high. |
| Not Applicable | HARM_BLOCK_THRESHOLD_UNSPECIFIED | Threshold not specified; the default threshold is used. |
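The threshold semantics above can be sketched as a simple lookup. This is an illustrative sketch only: the strings mirror the API constant names, but `BLOCKED_AT` and `is_blocked` are hypothetical helper names, not part of the SDK.

```python
# Which probability levels each API threshold blocks (illustrative mapping).
BLOCKED_AT = {
    "BLOCK_NONE": [],                                  # Block Nothing
    "BLOCK_ONLY_HIGH": ["HIGH"],                       # Block Few
    "BLOCK_MEDIUM_AND_ABOVE": ["MEDIUM", "HIGH"],      # Block Some
    "BLOCK_LOW_AND_ABOVE": ["LOW", "MEDIUM", "HIGH"],  # Block Most
}

def is_blocked(threshold: str, probability: str) -> bool:
    """Return True if content rated `probability` would be blocked under `threshold`."""
    return probability in BLOCKED_AT[threshold]
```

For instance, under `BLOCK_ONLY_HIGH` (Block Few), only `HIGH`-probability content is blocked, while `LOW` and `MEDIUM` pass through.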

You can enable BLOCK_NONE in your code using the following settings:

```python
import google.generativeai as genai
from google.generativeai.types import HarmBlockThreshold, HarmCategory

# Set every adjustable category to the loosest threshold.
safety_settings = [
    {
        "category": HarmCategory.HARM_CATEGORY_HARASSMENT,
        "threshold": HarmBlockThreshold.BLOCK_NONE,
    },
    {
        "category": HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        "threshold": HarmBlockThreshold.BLOCK_NONE,
    },
    {
        "category": HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
        "threshold": HarmBlockThreshold.BLOCK_NONE,
    },
    {
        "category": HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        "threshold": HarmBlockThreshold.BLOCK_NONE,
    },
]

model = genai.GenerativeModel("gemini-2.0-flash-exp")
response = model.generate_content(
    message,  # the prompt text to send
    safety_settings=safety_settings,
)
```

However, note that even with all categories set to BLOCK_NONE, Gemini may still filter content based on its own assessment of safety in context.

How to Reduce the Probability of Encountering Safety Restrictions?

Generally, the Flash series has stricter safety restrictions than the Pro and Thinking series, so try switching to a different model.

Additionally, when dealing with potentially sensitive content, sending smaller amounts of content per request and reducing the context length can help lower the frequency of safety filtering.
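One way to send smaller amounts per request is to split a long text into batches before translating. This is a minimal sketch; `chunk_text` and the `max_chars` limit are illustrative choices, not API constraints, and a single line longer than the limit is kept whole.

```python
def chunk_text(text: str, max_chars: int = 2000) -> list[str]:
    """Greedily pack whole lines into chunks of at most max_chars characters."""
    chunks, current = [], ""
    for line in text.splitlines(keepends=True):
        if current and len(current) + len(line) > max_chars:
            chunks.append(current)  # flush the current chunk before it overflows
            current = ""
        current += line
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be sent as a separate `generate_content` call, keeping the context short.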

How to Completely Disable Gemini's Safety Checks and Allow All Content?

Bind an overseas credit card and upgrade to a paid-tier account.