SafesearchVideoContentSignals

AI Overview

  • The potential purpose of this module is to classify and score video content based on its safety and suitability for different audiences. It appears to be a part of Google's SafeSearch feature, which aims to filter out inappropriate or offensive content from search results.
  • This module could impact search results by influencing the ranking and visibility of videos that contain potentially harmful or offensive content. Videos that are classified as "abuse with high confidence" may be demoted or removed from search results, while videos with lower safety scores may be ranked lower or flagged for review.
  • To be treated more favorably by this function, a website may consider the following strategies:
    • Ensure that video content is appropriate and safe for all audiences.
    • Provide clear and accurate metadata and labeling for video content.
    • Avoid using misleading or sensational titles, descriptions, or thumbnails that may trigger SafeSearch filters.
    • Consider using content moderation tools or services to review and flag potentially harmful content.


GoogleApi.ContentWarehouse.V1.Model.SafesearchVideoContentSignals (google_api_content_warehouse v0.4.0)

SafeSearch video content classification scores are computed based on go/golden7 video features. To access these scores, see the library at google3/quality/safesearch/video/api/video_score_info.h. Next ID: 6

Attributes

  • internalMultiLabelClassification (type: GoogleApi.ContentWarehouse.V1.Model.SafesearchVideoContentSignalsMultiLabelClassificationInfo.t, default: nil) -
  • isAbuseWithHighConfidence (type: boolean(), default: nil) - Used by Amarna to determine whether it should notify Raffia for immediate reprocessing. This field is generated in Amarna's image_metadata corpus, exported to the references_video_search corpus, and written to ExportState.module_state.critical_metadata_checksum, so that Amarna can immediately notify Raffia whenever the value of is_abuse_with_high_confidence changes.
  • scores (type: map(), default: nil) -
  • versionTag (type: String.t, default: nil) -
  • videoClassifierOutput (type: GoogleApi.ContentWarehouse.V1.Model.SafesearchVideoClassifierOutput.t, default: nil) - Output of all SafeSearch video classifiers in Amarna.
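Taken together, these attributes suggest how a consumer of the struct might act on the abuse flag. The sketch below mirrors the Amarna/Raffia behavior described for isAbuseWithHighConfidence; the module name and helper function are hypothetical, while the struct and its fields come from the library:

```elixir
defmodule SafesearchExample do
  alias GoogleApi.ContentWarehouse.V1.Model.SafesearchVideoContentSignals

  # Hypothetical helper mirroring the behavior described above: a video
  # flagged as abuse with high confidence warrants immediate reprocessing.
  def needs_immediate_reprocessing?(%SafesearchVideoContentSignals{} = signals) do
    signals.isAbuseWithHighConfidence == true
  end
end
```

A caller holding a decoded struct could then route it for reprocessing when `SafesearchExample.needs_immediate_reprocessing?/1` returns `true`.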

Summary

Types

t()

Functions

decode(value, options)

Unwrap a decoded JSON object into its complex fields.

Types

t()

@type t() :: %GoogleApi.ContentWarehouse.V1.Model.SafesearchVideoContentSignals{
  internalMultiLabelClassification:
    GoogleApi.ContentWarehouse.V1.Model.SafesearchVideoContentSignalsMultiLabelClassificationInfo.t()
    | nil,
  isAbuseWithHighConfidence: boolean() | nil,
  scores: map() | nil,
  versionTag: String.t() | nil,
  videoClassifierOutput:
    GoogleApi.ContentWarehouse.V1.Model.SafesearchVideoClassifierOutput.t()
    | nil
}

Functions

decode(value, options)

@spec decode(struct(), keyword()) :: struct()

Unwrap a decoded JSON object into its complex fields.
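Assuming the google_api_content_warehouse package is installed, the model is typically built from a JSON payload via Poison's `as:` struct decoding, with `decode/2` applied by the library to unwrap nested complex fields such as videoClassifierOutput. A minimal sketch (the JSON field values are illustrative, not real scores):

```elixir
alias GoogleApi.ContentWarehouse.V1.Model.SafesearchVideoContentSignals

# Illustrative payload; scores and versionTag values are made up.
json = ~s({"versionTag": "v1", "isAbuseWithHighConfidence": false, "scores": {}})

{:ok, %SafesearchVideoContentSignals{} = signals} =
  Poison.decode(json, as: %SafesearchVideoContentSignals{})
```

After decoding, `signals.versionTag` holds the classifier version string and `signals.scores` the per-class score map.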