AbuseiamAbuseType

AI Overview

  • The potential purpose of this module is to identify and categorize types of abusive content, such as pornography, hate speech, or harassment, helping Google's algorithms filter harmful or offensive results out of search queries.
  • This module could affect search results by influencing the ranking of websites that contain abusive content, potentially demoting them or removing them from search engine results pages (SERPs) to provide a safer, more respectful user experience.
  • A website may make itself more favorable to this function by maintaining a clear and effective content moderation policy, removing or flagging abusive content, and providing a safe, respectful environment for users, all of which could improve its search ranking and visibility.


GoogleApi.ContentWarehouse.V1.Model.AbuseiamAbuseType (google_api_content_warehouse v0.4.0)

Attributes

  • id (type: String.t, default: nil) -
  • subtype (type: String.t, default: nil) - Optional client-specific subtype of abuse that is too specific to belong in the above enumeration. For example, some clients may want to differentiate nudity from graphic sex, but both are PORNOGRAPHY.

Summary

Types

t()

Functions

decode(value, options)

Unwrap a decoded JSON object into its complex fields.

Types


t()

@type t() :: %GoogleApi.ContentWarehouse.V1.Model.AbuseiamAbuseType{
  id: String.t() | nil,
  subtype: String.t() | nil
}
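Since both fields are plain strings, the struct is straightforward to construct. A minimal standalone sketch (the module name `AbuseTypeSketch` is hypothetical, defined here only so the snippet runs without the library installed):

```elixir
# Hypothetical standalone struct mirroring AbuseiamAbuseType's two fields.
defmodule AbuseTypeSketch do
  defstruct id: nil, subtype: nil
end

# Example: a client distinguishing nudity from other pornography via subtype,
# as described in the attribute docs above.
abuse = %AbuseTypeSketch{id: "PORNOGRAPHY", subtype: "nudity"}
IO.inspect(abuse)
```

With the real library, the same shape would be built as `%GoogleApi.ContentWarehouse.V1.Model.AbuseiamAbuseType{id: ..., subtype: ...}`.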

Functions


decode(value, options)

@spec decode(struct(), keyword()) :: struct()

Unwrap a decoded JSON object into its complex fields.
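For this particular model, decode has little unwrapping to do: both fields are strings, so there are no nested model structs to recurse into. A hedged sketch of the idea (`DecodeSketch` and its `decode/1` are illustrative stand-ins, not the library's implementation, which delegates JSON handling to its configured decoder):

```elixir
# Hypothetical sketch of decoding a JSON-style map into the struct's fields.
# This only illustrates the shape of the operation for a model whose fields
# are all simple (non-nested) values.
defmodule DecodeSketch do
  defstruct id: nil, subtype: nil

  def decode(map) when is_map(map) do
    %DecodeSketch{id: map["id"], subtype: map["subtype"]}
  end
end

decoded = DecodeSketch.decode(%{"id" => "HARASSMENT", "subtype" => nil})
IO.inspect(decoded.id)
```

For models with nested message fields, the real decode/2 would additionally convert each nested map into its corresponding model struct.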