ImageSafesearchContentBrainPornAnnotation

AI Overview

  • The potential purpose of this module is to detect and filter explicit or harmful content from Google's image search results, such as child abuse imagery, pornography, and violence. It appears to be part of Google's SafeSearch feature, which aims to provide a safer, more family-friendly search experience.
  • This module could impact search results by removing or demoting images deemed inappropriate or harmful, giving users a cleaner, more respectful search experience. However, it may also produce false positives or over-filtering, causing relevant images to be omitted from search results.
  • A website may make itself more favorable to this function by labeling and tagging its images appropriately and ensuring they contain no explicit or harmful content. Keeping images relevant and contextually appropriate may also help a site avoid being flagged. Websites should not attempt to manipulate or deceive this module, as doing so could lead to penalties or removal from search results.

GoogleApi.ContentWarehouse.V1.Model.ImageSafesearchContentBrainPornAnnotation (google_api_content_warehouse v0.4.0)

Don't change the field names. The names are used as sparse feature labels in client projects.

Attributes

  • childScore (type: number(), default: nil) - The probability that the youngest person in the image is a child.
  • csaiScore (type: float(), default: nil) - This score correlates with potential child abuse. Google confidential!
  • csamA1Score (type: number(), default: nil) - Experimental score. Do not use. Google confidential!
  • csamAgeIndeterminateScore (type: number(), default: nil) - Experimental score. Do not use. Google confidential!
  • iuInappropriateScore (type: number(), default: nil) - This field contains the probability that an image is inappropriate for Images Universal, according to this policy: go/iupolicy.
  • medicalScore (type: number(), default: nil) -
  • pedoScore (type: number(), default: nil) -
  • pornScore (type: float(), default: nil) -
  • racyScore (type: number(), default: nil) - This score is related to an image being sexually suggestive.
  • semanticSexualizationScore (type: number(), default: nil) - This score is related to racy/sexual images where scores have semantic meaning from 0 to 1.
  • spoofScore (type: number(), default: nil) -
  • version (type: String.t, default: nil) -
  • violenceScore (type: number(), default: nil) -
  • ytPornScore (type: number(), default: nil) - Deprecated, use porn_score instead. The most recent model version does not produce this anymore.
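
As a hedged illustration of how these attributes surface on the decoded struct, the sketch below builds the model by hand and applies a hypothetical client-side threshold check. The score values and cutoffs are made up for the example; they are not documented thresholds.

alias GoogleApi.ContentWarehouse.V1.Model.ImageSafesearchContentBrainPornAnnotation

# Hypothetical annotation; the scores below are illustrative only.
annotation = %ImageSafesearchContentBrainPornAnnotation{
  pornScore: 0.03,
  racyScore: 0.62,
  violenceScore: 0.01,
  version: "v3"
}

# Treat missing scores as 0.0 and flag the image when any score crosses an
# assumed cutoff. Real consumers would use policy-defined thresholds.
flagged? =
  (annotation.pornScore || 0.0) > 0.8 or
    (annotation.racyScore || 0.0) > 0.9 or
    (annotation.violenceScore || 0.0) > 0.9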

Summary

Types

t()

Functions

decode(value, options)

Unwrap a decoded JSON object into its complex fields.

Types

t()

@type t() ::
  %GoogleApi.ContentWarehouse.V1.Model.ImageSafesearchContentBrainPornAnnotation{
    childScore: number() | nil,
    csaiScore: float() | nil,
    csamA1Score: number() | nil,
    csamAgeIndeterminateScore: number() | nil,
    iuInappropriateScore: number() | nil,
    medicalScore: number() | nil,
    pedoScore: number() | nil,
    pornScore: float() | nil,
    racyScore: number() | nil,
    semanticSexualizationScore: number() | nil,
    spoofScore: number() | nil,
    version: String.t() | nil,
    violenceScore: number() | nil,
    ytPornScore: number() | nil
  }

Functions

decode(value, options)

@spec decode(struct(), keyword()) :: struct()

Unwrap a decoded JSON object into its complex fields.
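
A minimal usage sketch, assuming Poison (the JSON library used by the google_api_* Elixir clients) is available: the JSON payload is decoded with the struct as a target, and decode/2 resolves any nested model fields. Since this model contains only scalar fields, decode/2 is effectively a pass-through here. The payload below is illustrative, not real data.

alias GoogleApi.ContentWarehouse.V1.Model.ImageSafesearchContentBrainPornAnnotation

# Illustrative payload; field values are made up.
json = ~s({"pornScore": 0.02, "racyScore": 0.41, "version": "v3"})

{:ok, %ImageSafesearchContentBrainPornAnnotation{} = annotation} =
  Poison.decode(json, as: %ImageSafesearchContentBrainPornAnnotation{})

annotation.racyScore
#=> 0.41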