ImageSafesearchContentOffensiveSymbolDetection

AI Overview

  • The potential purpose of this module is to detect offensive symbols or images within search results so that harmful or inappropriate content can be filtered out or demoted. The module is part of the "SafeSearch" feature, which aims to provide a safer, more family-friendly search experience.
  • This module could impact search results by flagging or removing content that contains offensive symbols or images, leading to a cleaner, more suitable experience for users, especially children. It could also reduce the visibility of websites that host or promote harmful content.
  • A website can make itself more favorable to this function by keeping its content free of offensive symbols or images, using appropriate and respectful language, and adhering to Google's SafeSearch guidelines. Providing alt text and descriptive text for images also helps search engines understand image content, making offensive material easier to detect and filter.

GoogleApi.ContentWarehouse.V1.Model.ImageSafesearchContentOffensiveSymbolDetection (google_api_content_warehouse v0.4.0)

Attributes

  • matches (type: list(GoogleApi.ContentWarehouse.V1.Model.ImageSafesearchContentOffensiveSymbolMatch.t), default: nil) -

Summary

Types

t()

Functions

decode(value, options)

Unwrap a decoded JSON object into its complex fields.

Types

t()

@type t() ::
  %GoogleApi.ContentWarehouse.V1.Model.ImageSafesearchContentOffensiveSymbolDetection{
    matches:
      [
        GoogleApi.ContentWarehouse.V1.Model.ImageSafesearchContentOffensiveSymbolMatch.t()
      ]
      | nil
  }
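
Note that matches defaults to nil rather than an empty list, so downstream code should handle both shapes. A minimal sketch (the OffensiveSymbolHelpers module and match_count/1 helper are hypothetical, not part of the generated library):

defmodule OffensiveSymbolHelpers do
  alias GoogleApi.ContentWarehouse.V1.Model.ImageSafesearchContentOffensiveSymbolDetection

  # matches defaults to nil rather than [], so both shapes must be handled.
  def match_count(%ImageSafesearchContentOffensiveSymbolDetection{matches: nil}), do: 0

  def match_count(%ImageSafesearchContentOffensiveSymbolDetection{matches: matches})
      when is_list(matches),
      do: length(matches)
end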

Functions

decode(value, options)

@spec decode(struct(), keyword()) :: struct()

Unwrap a decoded JSON object into its complex fields.
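
In practice, decoding is normally driven through Poison, as in other google_api_* generated clients. A minimal usage sketch, assuming the library's standard Poison.Decoder integration (the JSON payload is illustrative):

alias GoogleApi.ContentWarehouse.V1.Model.ImageSafesearchContentOffensiveSymbolDetection

# Illustrative payload; real responses arrive from the Content Warehouse API.
json = ~s({"matches": []})

# Poison invokes this module's decode/2 through its Poison.Decoder
# implementation to unwrap the nested
# ImageSafesearchContentOffensiveSymbolMatch structs.
detection = Poison.decode!(json, as: %ImageSafesearchContentOffensiveSymbolDetection{})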