This Utility API uses AI to safeguard content by analyzing text against safety attributes such as toxicity, violence, and other sensitive topics.
The API categorizes content into harmful categories such as derogatory language, violent scenarios, sexual references, insults, profanity, and more. Confidence scores provide insight into the likelihood that content belongs to a specific category. This API is essential for maintaining a safe online environment, ensuring that user-generated text adheres to community guidelines and content standards. It provides a quick and reliable solution for text moderation, minimizing exposure to undesirable content and fostering a positive user experience.
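One common way to act on per-category confidence scores is to flag any category whose score meets a moderation threshold. The sketch below assumes a response shaped as a mapping from category name to a confidence in [0, 1]; the field names and threshold are illustrative, not the API's actual schema.

```python
# Hypothetical moderation response: category names mapped to confidence
# scores in [0, 1]. Names and values are illustrative only.
def flag_categories(scores, threshold=0.5):
    """Return, in sorted order, the categories whose confidence meets the threshold."""
    return sorted(cat for cat, conf in scores.items() if conf >= threshold)

response = {
    "toxicity": 0.82,
    "violence": 0.10,
    "profanity": 0.64,
    "insults": 0.31,
}

flagged = flag_categories(response)  # ["profanity", "toxicity"]
```

The threshold is a policy decision: a lower value catches more borderline content at the cost of more false positives.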
The atoms cost is subject to change depending on the size of the input file and the provider selected. The providers and the atoms cost for each are listed below:
| Provider (requested_service) | Atoms |
|---|---|
| Google | 500 |
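Since the cost varies by provider, a client may want to look up the expected atoms charge before submitting a request. This is a minimal sketch assuming a flat per-provider base rate; only the Google rate comes from the table above, and the actual charge may also scale with input size as noted.

```python
# Base atoms cost per provider. Only the Google entry is from the
# documented table; the function name and structure are assumptions.
ATOMS_COST = {"Google": 500}

def atoms_for(provider):
    """Return the base atoms cost for a provider, or raise for unknown ones."""
    try:
        return ATOMS_COST[provider]
    except KeyError:
        raise ValueError(f"Unsupported provider: {provider}")

atoms_for("Google")  # 500
```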