PornFlagData

AI Overview

  • The potential purpose of this module is to detect and classify explicit or offensive content in images, such as pornography, violence, or offensive symbols, to help filter out inappropriate results from search queries.
  • This module could impact search results by demoting or removing images that are classified as explicit or offensive, especially for non-porn queries when SafeSearch is Off. This could lead to a safer and more family-friendly search experience.
  • A website may make itself more favorable to this function by ensuring that its images do not contain explicit or offensive content, or by providing alternative images that are safe for all audiences. Websites could also supply metadata or context about their images to help the classifier make more accurate decisions.


GoogleApi.ContentWarehouse.V1.Model.PornFlagData (google_api_content_warehouse v0.4.0)

A protocol buffer that stores the URL, referrer, and porn flag for a URL, plus an optional image score. Next available tag id: 51.

Attributes

  • debugInfo (type: list(GoogleApi.ContentWarehouse.V1.Model.ImagePornDebugInfo.t), default: nil) - DebugInfo stores debug information from the overall classifier. This allows, for instance, updating counters related to blacklisting without running the full classifier again.
  • finalOffensiveScore (type: number(), default: nil) - Final offensive score based on image salient terms and image OCR vulgar and offensive scores.
  • finalViolenceScore (type: number(), default: nil) - Final violence score based on some image signals (brain pixel score, co-clicked images violence score, navboost queries score, etc.).
  • finalViolenceScoreVersion (type: String.t, default: nil) - A string that indicates the version of SafeSearch classifier used to compute final_violence_score.
  • internalSignals (type: GoogleApi.ContentWarehouse.V1.Model.SafesearchInternalImageSignals.t, default: nil) - A proto that stores SafeSearch internal signals that are not exported to clients. SafeSearch team does not provide any guarantees about the presence or the semantics of these signals in the future.
  • numberFaces (type: integer(), default: nil) - Number of faces detected in the image.
  • ocrAnnotation (type: GoogleApi.ContentWarehouse.V1.Model.ImageSafesearchContentOCRAnnotation.t, default: nil) - Information about image OCR text. For details see image/safesearch/content/public/ocr_annotation.proto.
  • offensiveSymbolDetection (type: GoogleApi.ContentWarehouse.V1.Model.ImageSafesearchContentOffensiveSymbolDetection.t, default: nil) - QuimbyCongas-based detection of offensive symbols in the image (currently swastika and Nazi yellow badge).
  • photodnaHash (type: String.t, default: nil) - Binary version of the PhotoDNA hash (144 bytes long). If not set (has_photodna_hash() == false), the hash was not computed; if empty (has_photodna_hash() == true && photodna_hash() == ""), the computation failed (the hash cannot be computed for images smaller than 50 x 50). See the sketch after this list for how these states map onto the Elixir struct.
  • pornWithHighConfidence (type: boolean(), default: nil) - This field is set to true when we are pretty confident that the image is porn (with higher precision than the img_porn_moderate restrict). In particular, it means that the image might be demoted for non-porn queries when SafeSearch is Off.
  • qbstOffensiveScore (type: number(), default: nil) - QBST-based image offensive score, derived from Navboost.
  • qbstSpoofScore (type: number(), default: nil) - QBST-based image spoof score, derived from Navboost; unrelated to the pixel-based score in PornAnnotation.
  • queryStats (type: GoogleApi.ContentWarehouse.V1.Model.ClassifierPornQueryStats.t, default: nil) - Query statistics from Navboost logs. For more details see classifier/porn/proto/image_porn_classifier_signals.proto.
  • queryTextViolenceScore (type: number(), default: nil) - Aggregated navboost query violence score.
  • referer (type: String.t, default: nil) - URL of the referring page.
  • referrerCounts (type: GoogleApi.ContentWarehouse.V1.Model.ClassifierPornReferrerCounts.t, default: nil) - Information about referrers and their porn classification. For details see classifier/porn/proto/image_porn_classifier_signals.proto.
  • semanticSexualizationScore (type: number(), default: nil) - Starburst-based score predicting sexualization level of the image.
  • url (type: String.t, default: nil) - URL of the image.
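
In the Elixir struct, the C++-style accessor semantics described for photodnaHash collapse into a tri-state field: nil means the hash was never computed, an empty string means the computation failed, and anything else is the hash itself (over proto3 JSON, bytes fields arrive base64-encoded). The following is a minimal sketch under those assumptions; the module and function names are illustrative and not part of google_api_content_warehouse:

defmodule PhotoDNAStatus do
  alias GoogleApi.ContentWarehouse.V1.Model.PornFlagData

  # nil -> the hash was never computed
  # ""  -> computation failed (e.g. the image is smaller than 50 x 50)
  # any other value -> the hash (base64-encoded when it came from JSON)
  @spec status(PornFlagData.t()) :: :not_computed | :failed | {:ok, String.t()}
  def status(%PornFlagData{photodnaHash: nil}), do: :not_computed
  def status(%PornFlagData{photodnaHash: ""}), do: :failed
  def status(%PornFlagData{photodnaHash: hash}), do: {:ok, hash}
end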

Summary

Types

t()

Functions

decode(value, options)

Unwrap a decoded JSON object into its complex fields.

Types

t()

@type t() :: %GoogleApi.ContentWarehouse.V1.Model.PornFlagData{
  debugInfo: [GoogleApi.ContentWarehouse.V1.Model.ImagePornDebugInfo.t()] | nil,
  finalOffensiveScore: number() | nil,
  finalViolenceScore: number() | nil,
  finalViolenceScoreVersion: String.t() | nil,
  internalSignals:
    GoogleApi.ContentWarehouse.V1.Model.SafesearchInternalImageSignals.t() | nil,
  numberFaces: integer() | nil,
  ocrAnnotation:
    GoogleApi.ContentWarehouse.V1.Model.ImageSafesearchContentOCRAnnotation.t()
    | nil,
  offensiveSymbolDetection:
    GoogleApi.ContentWarehouse.V1.Model.ImageSafesearchContentOffensiveSymbolDetection.t()
    | nil,
  photodnaHash: String.t() | nil,
  pornWithHighConfidence: boolean() | nil,
  qbstOffensiveScore: number() | nil,
  qbstSpoofScore: number() | nil,
  queryStats:
    GoogleApi.ContentWarehouse.V1.Model.ClassifierPornQueryStats.t() | nil,
  queryTextViolenceScore: number() | nil,
  referer: String.t() | nil,
  referrerCounts:
    GoogleApi.ContentWarehouse.V1.Model.ClassifierPornReferrerCounts.t() | nil,
  semanticSexualizationScore: number() | nil,
  url: String.t() | nil
}
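
As a concrete illustration of how a consumer might act on these fields, the sketch below implements the demotion behavior described for pornWithHighConfidence: demote an image for non-porn queries when SafeSearch is Off only when the classifier was highly confident. The module name and the treatment of nil as "not confident" are assumptions for illustration; the actual thresholds and policies are internal to the classifier:

defmodule SafeSearchGate do
  alias GoogleApi.ContentWarehouse.V1.Model.PornFlagData

  # Returns true when the image should be demoted for non-porn queries
  # with SafeSearch Off. A nil flag is treated as "not confident".
  @spec demote_for_non_porn_query?(PornFlagData.t()) :: boolean()
  def demote_for_non_porn_query?(%PornFlagData{pornWithHighConfidence: true}), do: true
  def demote_for_non_porn_query?(%PornFlagData{}), do: false
end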

Functions

decode(value, options)

@spec decode(struct(), keyword()) :: struct()

Unwrap a decoded JSON object into its complex fields.
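
In the generated google_api_* Elixir clients, decode/2 is usually invoked indirectly through the Poison.Decoder protocol rather than called by hand. A typical round trip looks like the following; the JSON payload is a made-up example:

json = ~s({"url": "https://example.com/image.jpg", "numberFaces": 2})

flag_data =
  Poison.decode!(json,
    as: %GoogleApi.ContentWarehouse.V1.Model.PornFlagData{}
  )

flag_data.url
# => "https://example.com/image.jpg"

Nested model fields (for example queryStats or debugInfo) are unwrapped into their own structs by decode/2 during this pass.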