ImageSafesearchContentOCRAnnotation

AI Overview

  • The potential purpose of this module is to analyze and rate the safety and appropriateness of images on the web, particularly with regard to explicit or offensive content. It appears to use Optical Character Recognition (OCR) to extract text from images and then evaluate that text for pornographic, vulgar, or offensive content.
  • This module could impact search results by demoting or removing images that are deemed explicit or offensive, thereby creating a safer and more family-friendly search experience. It may also influence the ranking of images, with safer and more appropriate images being displayed more prominently in search results.
  • To be scored favorably by this function, a website could ensure that its images do not contain explicit or offensive content, and that any text within images is appropriate and safe for all audiences. Additionally, using relevant and descriptive alt attributes and captions for images could help search engines better understand the content and context of the images, leading to more accurate safety ratings.

GoogleApi.ContentWarehouse.V1.Model.ImageSafesearchContentOCRAnnotation (google_api_content_warehouse v0.4.0)

A protocol buffer to store the OCR annotation. Next available tag id: 10.

Attributes

  • ocrAnnotationVersion (type: String.t, default: nil) - A string that indicates the version of SafeSearch OCR annotation.
  • ocrProminenceScore (type: number(), default: nil) - A score derived from the Aksara geometry and spoof scores. Describes the 'visibility' or 'importance' of the text on the image, in the range [0, 1].
  • pornScore (type: number(), default: nil) - The raciness/porn score of the image's OCR text, computed by the porn query classifier.
  • prominentOffensiveScore (type: number(), default: nil) - Same as offensive_score, but weighted by prominence.
  • prominentVulgarScore (type: number(), default: nil) - Same as vulgar_score, but weighted by prominence.
  • qbstOffensiveScore (type: number(), default: nil) - The score produced by offensive salient terms model.
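
As a rough illustration of how these fields fit together, here is a minimal sketch of a populated annotation. The score values are hypothetical, chosen only to show that each field holds a number (the prominence and classifier scores appear to lie in [0, 1]); the thresholding at the end is an assumption, not documented behavior:

annotation = %GoogleApi.ContentWarehouse.V1.Model.ImageSafesearchContentOCRAnnotation{
  ocrAnnotationVersion: "1.0",
  ocrProminenceScore: 0.85,
  pornScore: 0.02,
  prominentOffensiveScore: 0.01,
  prominentVulgarScore: 0.01,
  qbstOffensiveScore: 0.03
}

# A downstream consumer might flag the image's text as unsafe once a
# prominence-weighted score crosses some threshold (0.5 here is arbitrary):
unsafe? = annotation.prominentOffensiveScore > 0.5 or annotation.pornScore > 0.5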

Summary

Types

t()

Functions

decode(value, options)

Unwrap a decoded JSON object into its complex fields.

Types

t()

@type t() :: %GoogleApi.ContentWarehouse.V1.Model.ImageSafesearchContentOCRAnnotation{
  ocrAnnotationVersion: String.t() | nil,
  ocrProminenceScore: number() | nil,
  pornScore: number() | nil,
  prominentOffensiveScore: number() | nil,
  prominentVulgarScore: number() | nil,
  qbstOffensiveScore: number() | nil
}

Functions

decode(value, options)

@spec decode(struct(), keyword()) :: struct()

Unwrap a decoded JSON object into its complex fields.
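
A usage sketch, assuming the Poison-based decoding convention used across the generated google_api_* Elixir clients (the JSON payload here is hypothetical):

json = ~s({"ocrAnnotationVersion": "1.0", "pornScore": 0.02})

annotation =
  Poison.decode!(json,
    as: %GoogleApi.ContentWarehouse.V1.Model.ImageSafesearchContentOCRAnnotation{}
  )

# decode/2 is then invoked through the Poison.Decoder protocol to unwrap any
# nested model fields; this model has only primitive fields (strings and
# numbers), so there is nothing further to unwrap.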