AbuseiamVerdict

AI Overview

  • The potential purpose of this module is to determine whether a piece of content (e.g., a webpage or an image) is abusive, and to generate a verdict based on that evaluation. This verdict can then be used to take action against the content, such as removing it or restricting access to it.
  • This module could impact search results by influencing the ranking of websites that contain abusive content. If a website is deemed to have abusive content, it may be demoted in search results or even removed entirely. This could lead to a safer and more trustworthy search experience for users.
  • A website may make itself more favorable to this function by ensuring that its content complies with Google's policies and guidelines, and by implementing measures to prevent abusive content from being uploaded or shared on its platform. This could include using machine learning to detect and remove abusive content, as well as providing reporting mechanisms so users can flag suspicious content.

GoogleApi.ContentWarehouse.V1.Model.AbuseiamVerdict (google_api_content_warehouse v0.4.0)

Verdict against a target. AbuseIAm generates a verdict based on evaluations. AbuseIAm can send such verdicts to clients for enforcement.

Attributes

  • client (type: GoogleApi.ContentWarehouse.V1.Model.AbuseiamClient.t, default: nil) - Target client of the verdict. It can be used to differentiate verdicts from multiple clients when such verdicts are processed in one common place.
  • comment (type: String.t, default: nil) - Additional info regarding the verdict.
  • decision (type: String.t, default: nil) -
  • durationMins (type: integer(), default: nil) - Time duration (in minutes) of the verdict.
  • evaluation (type: list(GoogleApi.ContentWarehouse.V1.Model.AbuseiamEvaluation.t), default: nil) - Evaluations relevant to this verdict. Every Verdict should contain at least one Evaluation.
  • hashes (type: list(GoogleApi.ContentWarehouse.V1.Model.AbuseiamHash.t), default: nil) - Details of all the hashes that can be computed on a message, such as simhash and attachment hash.
  • isLegalIssued (type: boolean(), default: nil) - Is this verdict issued by legal?
  • miscScores (type: list(GoogleApi.ContentWarehouse.V1.Model.AbuseiamNameValuePair.t), default: nil) - This field is used to pass relevant/necessary scores to our clients. For example, ASBE propagates these scores to moonshine.
  • reasonCode (type: String.t, default: nil) - A short description of the reason why the verdict decision is made.
  • region (type: list(GoogleApi.ContentWarehouse.V1.Model.AbuseiamRegion.t), default: nil) - The regions in which this verdict should be enforced. Absence of this field indicates that the verdict is applicable everywhere.
  • restriction (type: list(GoogleApi.ContentWarehouse.V1.Model.AbuseiamVerdictRestriction.t), default: nil) - Restrictions on where this verdict applies. If any restriction is met, the verdict is applied there. If no restrictions are present, the verdict is considered global.
  • strikeCategory (type: String.t, default: nil) - Category of the strike if this is a strike verdict.
  • target (type: GoogleApi.ContentWarehouse.V1.Model.AbuseiamTarget.t, default: nil) -
  • targetTimestampMicros (type: String.t, default: nil) - The timestamp of the target. E.g., the time when the target was updated.
  • timestampMicros (type: String.t, default: nil) - When the verdict is generated.
  • userNotification (type: list(GoogleApi.ContentWarehouse.V1.Model.AbuseiamUserNotification.t), default: nil) - Extra notification(s) to be delivered to target user or message owner about the verdict.
  • version (type: String.t, default: nil) - Version of the decision script.
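
To make the shape of this record concrete, here is a minimal sketch of a verdict as it might look once decoded into the Elixir struct. The field values are hypothetical illustrations, not values documented by the library; only the field names come from the attribute list above.

# Hypothetical example; all values are illustrative.
verdict = %GoogleApi.ContentWarehouse.V1.Model.AbuseiamVerdict{
  decision: "REJECT",                  # assumed decision label, not documented above
  reasonCode: "SPAM",                  # short reason why the decision was made
  durationMins: 1440,                  # verdict lasts one day
  isLegalIssued: false,                # not issued by legal
  timestampMicros: "1717000000000000", # when the verdict was generated
  restriction: []                      # no restrictions present
}

# Per the restriction field above: if no restrictions are present the verdict is
# global; otherwise it applies wherever any one of the restrictions is met.
global? = verdict.restriction in [nil, []]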

Summary

Types

t()

Functions

decode(value, options)

Unwrap a decoded JSON object into its complex fields.

Types

t()

@type t() :: %GoogleApi.ContentWarehouse.V1.Model.AbuseiamVerdict{
  client: GoogleApi.ContentWarehouse.V1.Model.AbuseiamClient.t() | nil,
  comment: String.t() | nil,
  decision: String.t() | nil,
  durationMins: integer() | nil,
  evaluation:
    [GoogleApi.ContentWarehouse.V1.Model.AbuseiamEvaluation.t()] | nil,
  hashes: [GoogleApi.ContentWarehouse.V1.Model.AbuseiamHash.t()] | nil,
  isLegalIssued: boolean() | nil,
  miscScores:
    [GoogleApi.ContentWarehouse.V1.Model.AbuseiamNameValuePair.t()] | nil,
  reasonCode: String.t() | nil,
  region: [GoogleApi.ContentWarehouse.V1.Model.AbuseiamRegion.t()] | nil,
  restriction:
    [GoogleApi.ContentWarehouse.V1.Model.AbuseiamVerdictRestriction.t()] | nil,
  strikeCategory: String.t() | nil,
  target: GoogleApi.ContentWarehouse.V1.Model.AbuseiamTarget.t() | nil,
  targetTimestampMicros: String.t() | nil,
  timestampMicros: String.t() | nil,
  userNotification:
    [GoogleApi.ContentWarehouse.V1.Model.AbuseiamUserNotification.t()] | nil,
  version: String.t() | nil
}

Functions

decode(value, options)

@spec decode(struct(), keyword()) :: struct()

Unwrap a decoded JSON object into its complex fields.
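
As a rough usage sketch: in normal use the generated API functions handle response decoding, but a raw JSON payload can also be decoded by hand and then unwrapped with decode/2. This assumes Poison is available; the payload and its values below are hypothetical.

# Hypothetical JSON payload; field names mirror the attributes documented above.
json = ~s({"decision": "REJECT", "reasonCode": "SPAM", "durationMins": 60})

# Decode into the model struct, then unwrap any nested complex fields
# (e.g. evaluation, restriction) into their own model structs.
verdict =
  json
  |> Poison.decode!(as: %GoogleApi.ContentWarehouse.V1.Model.AbuseiamVerdict{})
  |> GoogleApi.ContentWarehouse.V1.Model.AbuseiamVerdict.decode([])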