ImageSafesearchContentOffensiveSymbolMatch

AI Overview

  • The potential purpose of this module is to identify and flag images that contain offensive symbols so that Google can filter them out of, or demote them in, search results and maintain a safe and respectful user experience.
  • This module could impact search results by removing or lowering the ranking of images that contain offensive symbols, making it less likely that users encounter harmful or disturbing content. This is particularly important for users searching for sensitive or family-friendly content.
  • A website may make itself more favorable to this function by ensuring that its images do not contain offensive symbols, using alternative images or graphics that convey the same message in a respectful manner, and providing clear, accurate alt text and descriptions that help Google's algorithms understand image content.


GoogleApi.ContentWarehouse.V1.Model.ImageSafesearchContentOffensiveSymbolMatch (google_api_content_warehouse v0.4.0)

Each entry corresponds to an image containing an offensive symbol.

Attributes

  • score (type: number(), default: nil) - Confidence score of the match. The higher, the more likely to match the symbol.
  • type (type: String.t, default: nil) -
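
A minimal sketch of what one of these entries looks like as an Elixir struct; the score and type values below are invented placeholders for illustration, not values documented by the API.

match = %GoogleApi.ContentWarehouse.V1.Model.ImageSafesearchContentOffensiveSymbolMatch{
  # Illustrative confidence value; real scores are produced by Google's classifier.
  score: 0.87,
  # Hypothetical symbol type string; the documentation does not enumerate valid values.
  type: "EXAMPLE_SYMBOL_TYPE"
}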

Summary

Types

t()

Functions

decode(value, options)

Unwrap a decoded JSON object into its complex fields.

Types

t()

@type t() ::
  %GoogleApi.ContentWarehouse.V1.Model.ImageSafesearchContentOffensiveSymbolMatch{
    score: number() | nil,
    type: String.t() | nil
  }
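
Because score is a confidence value, a consumer would typically filter matches against a threshold. A rough sketch under that assumption, with invented input data and an arbitrary 0.5 cutoff (the page does not document any threshold):

alias GoogleApi.ContentWarehouse.V1.Model.ImageSafesearchContentOffensiveSymbolMatch,
  as: SymbolMatch

# Illustrative input; real matches would come from decoded API responses.
matches = [
  %SymbolMatch{score: 0.92, type: "EXAMPLE_TYPE_A"},
  %SymbolMatch{score: 0.15, type: "EXAMPLE_TYPE_B"}
]

# Keep only matches whose score clears the assumed 0.5 confidence threshold.
confident_matches =
  Enum.filter(matches, fn %SymbolMatch{score: score} -> is_number(score) and score > 0.5 end)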

Functions

decode(value, options)

@spec decode(struct(), keyword()) :: struct()

Unwrap a decoded JSON object into its complex fields.
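
Models in the google_api_* family are usually decoded from JSON with Poison, passing the target struct via the as: option; decode/2 then unwraps any nested model fields (this struct has none). A minimal sketch under that assumption, with a made-up JSON payload:

alias GoogleApi.ContentWarehouse.V1.Model.ImageSafesearchContentOffensiveSymbolMatch,
  as: SymbolMatch

# Hypothetical JSON payload for a single offensive-symbol match.
json = ~s({"score": 0.92, "type": "EXAMPLE_SYMBOL_TYPE"})

# Poison's as: option decodes the JSON directly into the model struct.
{:ok, %SymbolMatch{score: score, type: type}} = Poison.decode(json, as: %SymbolMatch{})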