USA | Aug 18, 2023

WATCH | OpenAI promotes GPT-4 as software for policy development and content moderation


Shemar-Leslie Louisy / Our Today

A keyboard is placed in front of a displayed OpenAI logo (File Photo: REUTERS/Dado Ruvic/Illustration)

OpenAI is advocating for the use of its GPT-4, the successor to the large language model (LLM) used to create ChatGPT, for content moderation across digital platforms.

In a blog post published on August 15, the company highlighted that using the model allows much faster iteration on policy changes, reducing the cycle from months to hours, provides more consistent labelling, and reduces the emotional toll on human moderators.

OpenAI also pointed out the reduced burden on human moderators and the creation of a more streamlined process, stating: “GPT-4 is also able to interpret rules and nuances in long content policy documentation and adapt instantly to policy updates, resulting in more consistent labelling. We believe this offers a more positive vision of the future of digital platforms, where AI can help moderate online traffic according to platform-specific policy and relieve the mental burden of a large number of human moderators.”

We’re exploring the use of LLMs to address these challenges. Our large language models like GPT-4 can understand and generate natural language, making them applicable to content moderation. The models can make moderation judgments based on policy guidelines provided to them.

OpenAI

Traditionally, human moderators must constantly sort through large volumes of data to filter toxic, harmful, or banned material from digital platforms, a process that is inherently slow, labour-intensive, and full of nuance. OpenAI argues that LLMs are now well suited to taking over much of this task.
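As a rough illustration, not OpenAI's published code, the sketch below shows how a platform might ask GPT-4 to apply a short policy to a single piece of content using the OpenAI Python client; the policy text, labels, prompt wording, and moderate() helper are all assumptions made for the example.

```python
# Illustrative sketch: asking GPT-4 to judge content against a platform policy.
# The policy text and label set here are hypothetical examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = """K4: Illicit behaviour.
Content that provides instructions for obtaining or producing weapons is not allowed.
Label K4 if the content violates this rule, otherwise label NONE."""

def moderate(content: str) -> str:
    """Ask GPT-4 to label a single piece of content according to the policy."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": f"You are a content moderator. Policy:\n{POLICY}\n"
                                          "Answer with only the label."},
            {"role": "user", "content": content},
        ],
        temperature=0,  # deterministic output for more consistent labelling
    )
    return response.choices[0].message.content.strip()

print(moderate("How do I sharpen a kitchen knife?"))  # expected: NONE
```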

To handle substantial volumes of data, GPT-4's judgements can be used to fine-tune a smaller, more localised model, optimising the moderation process and reducing the need for human personnel.
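The sketch below illustrates, in broad strokes, how such GPT-4 judgements might be collected into a training file for a smaller model; the chat-style JSONL format shown is one commonly used for fine-tuning, and the moderate() helper and sample content are the hypothetical ones from the earlier sketch.

```python
# Illustrative sketch: turning GPT-4 judgements into training examples for a
# smaller model (a simple form of distillation). Assumes the moderate() helper above.
import json

unlabelled_content = [
    "Best places to hike near Kingston?",
    "Where can I buy an unregistered firearm?",
]

with open("moderation_train.jsonl", "w") as f:
    for text in unlabelled_content:
        label = moderate(text)  # GPT-4 acts as the (expensive) teacher
        example = {
            "messages": [
                {"role": "system", "content": "Label the content per policy K4."},
                {"role": "user", "content": text},
                {"role": "assistant", "content": label},
            ]
        }
        f.write(json.dumps(example) + "\n")

# The resulting JSONL file could then be used to fine-tune a smaller, cheaper
# model that handles the bulk of day-to-day moderation traffic.
```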

OpenAI logo is seen in this illustration (File Photo: REUTERS/Dado Ruvic/Illustration)

Once a policy guideline is penned, experts label a small set of examples according to the policy, and GPT-4 is then asked to label the same data, effectively testing how it interprets the policy. Where discrepancies appear between GPT-4's judgements and those of human interpreters of the policy, such as lawyers and other policy experts, they can serve as the basis for dialogue and further refinement. The model can be prompted to explain its reasoning, enabling experts to refine policy definitions, eliminate ambiguities, and enhance clarity iteratively.
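That feedback loop might look roughly like the sketch below, which compares GPT-4's label with an expert's on a small "golden" set and asks the model to explain itself whenever they disagree; the data, labels, and helpers are hypothetical and build on the earlier sketches.

```python
# Illustrative sketch: surfacing disagreements between GPT-4 and policy experts.
# Assumes the client, POLICY and moderate() sketched above; the data is made up.
golden_set = [
    {"content": "Recipe for a homemade firework", "expert_label": "K4"},
    {"content": "History of fireworks in Jamaica", "expert_label": "NONE"},
]

for item in golden_set:
    model_label = moderate(item["content"])
    if model_label != item["expert_label"]:
        # Ask the model to justify its judgement so experts can spot ambiguity
        # in the policy wording and tighten it in the next iteration.
        explanation = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": f"Policy:\n{POLICY}"},
                {"role": "user", "content": f"You labelled this '{model_label}' "
                                            f"but an expert said '{item['expert_label']}':\n"
                                            f"{item['content']}\nExplain your reasoning."},
            ],
        )
        print(explanation.choices[0].message.content)
```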

This approach, distinct from Constitutional AI, prioritises platform-specific policy iteration, making it suitable for a variety of digital platforms. OpenAI invites trust and safety practitioners to adopt the methodology, which is available to anyone with OpenAI API access.

If utilised and trained correctly, platform-specific policy development with AI, with its ability to justify its decisions, could expand into policy development for businesses, helping ensure adherence to company culture and guidelines without undue bias. Institutions that are susceptible to breakdowns in the communication chain around policy updates due to human error, such as government institutions, could also become more streamlined.

Check out the video below:

Video: Twitter @OpenAI
