AmberCutie's Forum
An adult community for cam models and members to discuss all the things!

ChaturSafe


Vixxen81

Cam Model
Nov 3, 2022
There are holes in the system. Someone was able to ask me if I have children, and when I checked it in the review panel it said the word "children" was allowed.

Please report any other malfunctions with the system here. I have two backups to protect myself, but CB was basically like "NSFW doesn't mean you're not safe at work"...this is probably the best way to improve the system.
 
  • Helpful!
Reactions: stormythunder
lol I just logged off and didn't even notice it. I'll have to check it out when there is no one waiting in my room anymore.
 
  • Like
Reactions: Vixxen81
Update:

There is a log of blocked messages and an option to send a notification.

The model sees the log and the viewer sees the blocked-message notice, but the model does not see the blocked-message notification.

There is a way to confirm or reject the message as violating policy or personal standards, I guess. It's very black and white, with little nuance from what I can tell.
 
If I understand it correctly, there are two levels to ChaturSafe:

The first level is the moderation filters, which you can enable or disable entirely; if enabled, you can select the level for each filter. If a message is caught by these filters it goes into the log and you can choose to accept or reject that it was caught appropriately. If there isn't already a realtime alert to broadcasters, it would be nice to have one; otherwise reviewing the log after each stream becomes its own chore.

The second level is the Trust & Safety filters which cannot be disabled and which only notify the user if their message is rejected. Rejected messages are not logged because broadcasters can't choose to accept or reject them since the filters are entirely the purview of CB.
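The two-level flow described above could be sketched roughly like this. Every function and field name here is hypothetical; CB has not published its implementation, so this is only an illustration of the behavior as understood in this thread:

```python
# Rough sketch of the two-level ChaturSafe flow described above.
# Every name here is hypothetical; CB's actual implementation is not public.

def trust_and_safety_blocks(msg):
    # Stand-in for the always-on platform screen (doxxing, poaching, spam).
    return "poach" in msg.lower()

def check_message(msg, broadcaster_filters, review_log):
    # Level 2: platform Trust & Safety filter. Cannot be disabled; the
    # sender is notified, and nothing reaches the broadcaster's log.
    if trust_and_safety_blocks(msg):
        return "rejected: sender notified only"

    # Level 1: broadcaster moderation filters. Each can be enabled or
    # disabled and has a selectable level; hits go to a log the
    # broadcaster can later confirm or reject.
    for f in broadcaster_filters:
        if f["enabled"] and f["check"](msg, f["level"]):
            review_log.append({"msg": msg, "filter": f["name"], "verdict": None})
            return "blocked: logged for broadcaster review"

    return "allowed"
```

The key asymmetry the sketch captures is that level-2 rejections never enter the broadcaster's review log, which is why there is nothing for the broadcaster to accept or reject.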
 
If I understand it correctly, there are two levels to ChaturSafe:

The first level is the moderation filters, which you can enable or disable entirely; if enabled, you can select the level for each filter. If a message is caught by these filters it goes into the log and you can choose to accept or reject that it was caught appropriately. If there isn't already a realtime alert to broadcasters, it would be nice to have one; otherwise reviewing the log after each stream becomes its own chore.

The second level is the Trust & Safety filters which cannot be disabled and which only notify the user if their message is rejected. Rejected messages are not logged because broadcasters can't choose to accept or reject them since the filters are entirely the purview of CB.
You're correct on the first part, but not the second paragraph. The screenshots it would take to prove that only the first part is true would be obnoxious to post. Suffice it to say I have analyzed every available option and tested it with the preview screen, and I prefer my own filter app over ChaturSafe at this time.

There are breakdowns of categories to filter, and some of them are absurd to a degree: if you filtered hard enough, someone would not be able to say their dog died and they are sad. I haven't seen anything that says Trust and Safety is a thing, or a thing that can't be disabled. If you want to provide screenshots of that, please do.
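As a toy illustration of that point, a hard keyword filter over an over-broad word list catches exactly that kind of harmless message. The word lists below are invented; CB has not published its actual categories or lists:

```python
# Illustrative only: why an aggressive keyword filter blocks innocuous chat.
# These word lists are made up; CB has not published its actual lists.

LEVELS = {
    "low":    {"kys"},
    "medium": {"kys", "kill"},
    "high":   {"kys", "kill", "died", "death", "sad"},  # over-broad
}

def blocked(message, level):
    # Flag the message if any word appears in the level's blocklist.
    words = set(message.lower().split())
    return bool(words & LEVELS[level])

# At "high", a harmless message gets caught:
# blocked("my dog died and i am sad", "high") -> True
# blocked("my dog died and i am sad", "low")  -> False
```

Word-level matching without any context is why a grieving regular and an abusive troll can trip the same filter at a strict setting.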
 
  • Helpful!
Reactions: stormythunder
From https://chaturcommunity.com/news/new-feature-chatursafe:
Platform-Level Trust and Safety Protection

In addition to your customizable filters, Trust and Safety filtering is enabled in all rooms by default. This system automatically screens messages for harmful, illegal, or policy-violating content in accordance with platform guidelines, including doxxing, poaching, and other forms of abuse. These protections cannot be disabled and will soon extend to uploaded emoticons.

These safeguards are designed to help protect you, your community, and your supporters, including preventing attempts to redirect or solicit your tippers within your room.

If a message is blocked under Trust and Safety rules, the sender will be notified.

My conversations with CB during the beta period for ChaturSafe were primarily focused on including anti-spam filters, which was added under Trust & Safety. It is permanently on, i.e., broadcasters can't enable or disable it.
 
From https://chaturcommunity.com/news/new-feature-chatursafe:


My conversations with CB during the beta period for ChaturSafe were primarily focused on including anti-spam filters, which was added under Trust & Safety. It is permanently on, i.e., broadcasters can't enable or disable it.
So that's just the AI that's already in place that issues bans.

But hey if you're so close to the dev team that you have that level of input, maybe tell them that they need to link models to that page instead of some bullshit twitter feed that doesn't explain a damn thing.
 
It's not. It's a new filter added with ChaturSafe.
Okay, no, I believe you; I'm just frustrated by how things are explained in general. You get one thing, I get another. I've been a broadcaster for over 13 years; I should have things spelled out to me in a very detailed manner, and the only notification I got was "it'll be here soon" and "here's a [busted] demo on X", and then I had to check the settings tab to enable it.

I'm not really happy with all of the holes this system has.
 
Okay, no, I believe you; I'm just frustrated by how things are explained in general. You get one thing, I get another. I've been a broadcaster for over 13 years; I should have things spelled out to me in a very detailed manner, and the only notification I got was "it'll be here soon" and "here's a [busted] demo on X", and then I had to check the settings tab to enable it.

I'm not really happy with all of the holes this system has.
I agree with you. Examples for each filter at low, medium and high settings would be very helpful, as well as for the Trust & Safety filter.

Speaking from my experience, it doesn't really hit the mark for models I know, most of whom are Russian or Eastern European. Phishing spam was the biggest issue by far for almost all the models I talked to about it and CB only added an anti-spam filter late in the development process.
 
Last edited:
  • Like
Reactions: Vixxen81 and NotYou
I agree with you. Examples for each filter at low, medium and high settings would be very helpful, as well as for the Trust & Safety filter.

Speaking from my experience, it doesn't really hit the mark for models I know, most of whom are Russian or Eastern European. Phishing spam was the biggest issue by far for almost all the models I talked to about it and CB only added an anti-spam filter late in the development process.
I had to test strings of words/phrases in the review window for each level to figure out where it gets cut off.

Is this Trust & Safety filter ONLY enabled when using Chatursafe or is that something running at all times now?
 
But hey if you're so close to the dev team that you have that level of input, maybe tell them that they need to link models to that page instead of some bullshit twitter feed that doesn't explain a damn thing.
Just to be clear, this was part of the official announcement, not something I was given because of the beta program. It's also linked from the X post.

[Attachment: Screenshot 2026-03-24 at 3.54.33 PM.png]
 
Is this Trust & Safety filter ONLY enabled when using Chatursafe or is that something running at all times now?
Technically ChaturSafe is running all the time. The only thing that broadcasters can disable is the moderation filters; they can't disable the Trust & Safety filter.
 
Just to be clear, this was part of the official announcement, not something I was given because of the beta program. It's also linked from the X post.

View attachment 105834
That link doesn't load for me on any browser I run; that's why I didn't know, because I can't see it for some reason. I don't have anything blocked or moderated on my browsers.
 
  • Helpful!
Reactions: stormythunder
That link doesn't load for me on any browser I run; that's why I didn't know, because I can't see it for some reason. I don't have anything blocked or moderated on my browsers.
I mostly use Firefox and sometimes Chrome if Firefox is taking too long to load CDN content.
 
  • Helpful!
Reactions: Vixxen81
I mostly use Firefox and sometimes Chrome if Firefox is taking too long to load CDN content.
Okay, got it to load on FF. Sheesh. Thanks for your help!

I'm curious about how the anti-poaching thing will go down since almost all of that happens in DM/PM (for my room).
 
  • Like
Reactions: smoker919
Okay, got it to load on FF. Sheesh. Thanks for your help!

I'm curious about how the anti-poaching thing will go down since almost all of that happens in DM/PM (for my room).
The first time I got banned from CB (!) was when I was developing my first anti-spam app. A model who had been beta testing my apps asked me what kind of messages would be blocked, so I sent her some examples in our PM, and after the third one I was banned. It was reversed within two days once an actual human reviewed it.

But what it means is that CB has been capable of monitoring this and auto-banning users for at least 3 years but hasn't applied this capability to the actual spammers. I hope that instead of just blocking specific messages that it will also ban the spambot accounts after some number of spam messages. Either way, though, the spambot creators will just continue to mutate the messages to get around the Trust & Safety filter and CB will be in the same arms race that developers are in now. But really it's an infrastructure issue and CB should be leading this fight instead of app developers.
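That mutation arms race is easy to see in miniature: an exact-match blocklist misses a lightly obfuscated message, so filters normalize text before matching, and spammers then mutate further to dodge the normalization. A minimal sketch, with an invented blocklist and invented normalization rules (nothing here reflects CB's actual filter):

```python
import re

# Invented example; not CB's actual blocklist or normalization rules.
BLOCKLIST = {"visit my site for free tokens"}

def normalize(msg):
    # Lowercase, undo common digit-for-letter swaps (0->o, 1->i, 3->e),
    # then strip punctuation and collapse whitespace.
    msg = msg.lower().translate(str.maketrans("013", "oie"))
    msg = re.sub(r"[^a-z ]+", " ", msg)
    return re.sub(r"\s+", " ", msg).strip()

def is_spam(msg):
    return normalize(msg) in BLOCKLIST

# A mutated spam message like "V1sit my s1te for fr33 tok3ns!!!" misses an
# exact-match check but is caught after normalization, until the spammer
# finds a mutation the normalizer doesn't undo.
```

Each new evasion trick means another normalization rule, which is exactly the treadmill app developers are on now and why platform-level enforcement (banning the accounts, not just the messages) matters.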
 
  • Like
Reactions: Vixxen81
The first time I got banned from CB (!) was when I was developing my first anti-spam app. A model who had been beta testing my apps asked me what kind of messages would be blocked, so I sent her some examples in our PM, and after the third one I was banned. It was reversed within two days once an actual human reviewed it.

But what it means is that CB has been capable of monitoring this and auto-banning users for at least 3 years but hasn't applied this capability to the actual spammers. I hope that instead of just blocking specific messages that it will also ban the spambot accounts after some number of spam messages. Either way, though, the spambot creators will just continue to mutate the messages to get around the Trust & Safety filter and CB will be in the same arms race that developers are in now. But really it's an infrastructure issue and CB should be leading this fight instead of app developers.
Oh yeah I've always known they can see and moderate everything at their discretion. I did ask someone if they had received any DMs after tipping me since the rollout and the answer was a solid "no". So, I guess the poaching part has been solved but I'm going to ask other people, too.
 
Oh yeah I've always known they can see and moderate everything at their discretion. I did ask someone if they had received any DMs after tipping me since the rollout and the answer was a solid "no". So, I guess the poaching part has been solved but I'm going to ask other people, too.
I received a DM after a big tip yesterday so it's not entirely gone 🫤
 
  • Sorry to hear that.
Reactions: Vixxen81
I received a DM after a big tip yesterday so it's not entirely gone 🫤
That sucks.

I did email support three days ago and I asked them if it was okay if I put up a chat notifier that says "If people are DMing you, please ignore them and report them" and they said I was free to moderate the room any way I choose.
 
  • Like
Reactions: smoker919