Updated: 29th November 2023

At Fanvue, we believe in providing a safe and secure platform for our users. We are committed to upholding high standards of content moderation and protection. This policy outlines the measures we take to ensure the safety and security of our users.

Keeping Users Safe

We take the safety of our users seriously. To ensure that our users are protected from harmful content, we have put in place guidelines on the type of content that can be uploaded on our platform.

Users entering the site for the first time are presented with an “18+ confirmation” pop-up box prompting them to confirm whether they are above or below 18 years old. If the user declares that they are below the age restriction, they are redirected to the homepage and are not able to continue further.
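As a purely illustrative sketch of this flow (the function names and the local-storage flag below are assumptions for illustration, not our actual implementation), the age-gate logic can be expressed roughly as follows:

```typescript
// Hypothetical sketch of the 18+ confirmation flow described above.
// handleAgeGateAnswer, redirectToHomepage and markAgeConfirmed are
// illustrative names only.

type AgeGateAnswer = "over18" | "under18";

function handleAgeGateAnswer(answer: AgeGateAnswer): void {
  if (answer === "under18") {
    // Declared below the age restriction: send the visitor back to the
    // homepage so they cannot continue further into the site.
    redirectToHomepage();
    return;
  }
  // Declared 18+: remember the confirmation so the pop-up is not shown again.
  markAgeConfirmed();
}

function redirectToHomepage(): void {
  window.location.assign("/");
}

function markAgeConfirmed(): void {
  // A cookie or local-storage flag is a common way to persist the answer.
  localStorage.setItem("ageConfirmed", "true");
}
```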

Any user who is found to have been untruthful and is in fact below the age of 18 is removed immediately, and their email address is blacklisted from the site.

We also have a reporting system that allows users to report any inappropriate content that they come across. We strictly prohibit any content that promotes hate, violence, discrimination, or harassment. For further information on complaints and reporting, please see our complaints policy here.

All user content that is not behind a paywall (such as profile pictures, banner images, and intro videos) is subject to automated nudity checks using Hive AI content moderation (3rd Party). Content that appears sensitive is automatically blurred, requiring the user to click “display sensitive content” to view it.
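The sketch below illustrates this check-and-blur step only in outline; the classifier interface, class labels, and threshold value are assumptions made for the example and are not Hive’s or our actual API:

```typescript
// Hypothetical sketch of the public-content nudity check described above.
// The classifier is injected as a function so no real third-party API is
// assumed; the threshold is illustrative.

interface NudityScore {
  label: string;  // e.g. a "suggestive" or "explicit" category from the classifier
  score: number;  // confidence between 0 and 1
}

const SENSITIVITY_THRESHOLD = 0.8; // illustrative value, not our real setting

async function moderatePublicMedia(
  mediaUrl: string,
  classify: (url: string) => Promise<NudityScore[]>
): Promise<{ blurred: boolean }> {
  const scores = await classify(mediaUrl);
  const looksSensitive = scores.some((s) => s.score >= SENSITIVITY_THRESHOLD);
  // Sensitive-looking media is blurred; the viewer must click
  // "display sensitive content" to reveal it.
  return { blurred: looksSensitive };
}
```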

KYC and Age Verification Checks

To ensure that our platform is safe for all users, we require all users wishing to become creators to undergo a Know Your Customer (KYC) and age verification process via Ondato (3rd Party). This is coupled with a liveness check, where the applicant must be present for a short video recording/photograph at the time of application. This process helps us verify the identity and age of our users and ensures that only those who meet our age requirements are allowed to use our platform. We also require all users to provide accurate and up-to-date information during registration, and we regularly review our user database to ensure that all user information is accurate and complete.

Manual checks of approved creators, their KYC ID documentation, and associated content take place frequently. All creators are manually reviewed prior to their first withdrawal on Fanvue; this review ensures that the content matches the ID documents on file and also runs the account through a number of pattern checks to identify any suspicious behaviour. IDs are stored in a secure system with Ondato so that future checks can take place, and we retain the ability to reference these IDs in our internal backend admin panel.

Manual checks take place on any creator who earns over $100 in any given day. This includes direct outreach and a review of the content and ID match.
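A minimal sketch of this earnings-based trigger is shown below; the $100 figure comes from this policy, while the data shapes and function name are assumptions made for illustration:

```typescript
// Illustrative sketch of the daily-earnings review trigger described above.

interface DailyEarnings {
  creatorId: string;
  date: string;      // ISO date, e.g. "2023-11-29"
  amountUsd: number; // gross earnings for that day
}

const MANUAL_REVIEW_THRESHOLD_USD = 100;

// Returns the creators whose earnings for a given day should be queued
// for a manual content and ID-match review.
function creatorsNeedingManualReview(earnings: DailyEarnings[]): string[] {
  return earnings
    .filter((e) => e.amountUsd > MANUAL_REVIEW_THRESHOLD_USD)
    .map((e) => e.creatorId);
}
```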

If any evidence, whether discovered by us or submitted by users, shows that a creator is not who they say they are, and/or that they do not have copyright permission for a significant quantity of their uploaded content, then Fanvue retains the right to block, disable, or delete the creator’s media and/or account.

Upon application to become a creator, a user is asked to declare whether they intend to upload explicit content. This flag cannot be changed by the user once set, and it is noted by our internal moderation team if they are required to investigate the user’s account.

Moderation of Uploaded Content

All uploaded content goes through multiple checks, including an AI auto-moderation process with our moderation provider HIVE. This quickly and effectively checks content for weapons, drugs, signs of abuse, hand signals, writing (digital and handwritten), and people counting, as well as an age categorisation check which assigns an age range to featured individuals based on facial features and markings.
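For illustration only, the sketch below shows how multi-class auto-moderation results of this kind might be combined into a single flag-for-review decision. The class names mirror the checks listed above, but the result shape and thresholds are assumptions, not HIVE’s actual response format:

```typescript
// Hypothetical sketch of combining per-class moderation scores and the
// age-categorisation result into one escalation decision.

type ModerationClass =
  | "weapons"
  | "drugs"
  | "abuse"
  | "hand_signals"
  | "text_digital"
  | "text_handwritten";

interface AutoModerationResult {
  classScores: Record<ModerationClass, number>;            // 0..1 confidence per class
  personCount: number;                                     // people counting
  estimatedAgeRanges: Array<{ min: number; max: number }>; // per detected face
}

const FLAG_THRESHOLD = 0.9;     // illustrative value
const MINIMUM_ADULT_AGE = 18;

function shouldFlagForHumanReview(result: AutoModerationResult): boolean {
  const anyClassFlagged = Object.values(result.classScores).some(
    (score) => score >= FLAG_THRESHOLD
  );
  // If the age categorisation places any featured individual in a range
  // that could fall below 18, the content is escalated to moderators.
  const possiblyUnderage = result.estimatedAgeRanges.some(
    (range) => range.min < MINIMUM_ADULT_AGE
  );
  return anyClassFlagged || possiblyUnderage;
}
```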

We have a team of experienced moderators who review all content that is flagged or deleted by HIVE. Our team also spot-checks creators’ accounts both randomly and instructionally: random checks are entirely random to remove any potential bias, while instructional checks are dictated by algorithms that flag when users and/or creators hit thresholds or fall into patterns of behaviour. Any content that violates our policies is removed immediately. Our moderators are trained to identify and remove any content that is harmful, abusive, or offensive.
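The two spot-check modes can be sketched as below; the sample sizes, threshold fields, and function names are illustrative assumptions rather than our actual selection algorithms:

```typescript
// Illustrative sketch of random vs. instructional spot-check selection.

interface CreatorActivity {
  creatorId: string;
  reportsReceived: number;
  dailyEarningsUsd: number;
}

// Random checks: an unbiased random sample of accounts.
function pickRandomChecks(creatorIds: string[], sampleSize: number): string[] {
  const pool = [...creatorIds];
  // Fisher-Yates shuffle, then take the first sampleSize entries.
  for (let i = pool.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [pool[i], pool[j]] = [pool[j], pool[i]];
  }
  return pool.slice(0, sampleSize);
}

// Instructional checks: accounts that cross configured thresholds are
// queued for review automatically.
function pickInstructionalChecks(
  activity: CreatorActivity[],
  maxReports: number,
  maxDailyEarningsUsd: number
): string[] {
  return activity
    .filter(
      (a) =>
        a.reportsReceived > maxReports ||
        a.dailyEarningsUsd > maxDailyEarningsUsd
    )
    .map((a) => a.creatorId);
}
```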

Content Protection Features

We have implemented several content protection features to prevent unauthorized sharing and distribution of user-generated content. We also have a system in place to detect and take action against any attempts to hack or steal user-generated content. We regularly review and update our content protection measures to ensure that our users' content is always secure and protected.

Partnerships for Content Protection