Content Policy for Gigsterr
Effective Date: 16 September 2024
Introduction
Welcome to Gigsterr. To ensure a safe and compliant experience for all users, we have established the following content policies. These guidelines apply to all content posted by Gigsterrs and Gigsterr Providers within the app.
Prohibited Content
a. Illegal Goods and Services
Prohibited: Jobs or tasks involving the sale, distribution, or use of illegal goods or services (e.g., drugs, stolen property, counterfeit items), or those that promote illegal activities.
Action: Immediate removal of the job/task listing. Persistent violations may result in account suspension or termination.
b. Explicit or Adult Content
Prohibited: Job descriptions or tasks containing pornography, explicit violence, or any adult content.
Action: Removal of the job/task listing. Accounts may be suspended for repeated offenses.
c. Hate Speech and Harassment
Prohibited: Content that promotes hate speech, discrimination, or harassment based on race, ethnicity, religion, gender, sexual orientation, disability, or any other protected category.
Action: Immediate removal of the job/task listing. Accounts involved in hate speech or harassment may face suspension or permanent ban.
d. Misinformation and Fraudulent Content
Prohibited: False information, fraudulent job offers, or deceptive tasks intended to mislead or defraud users.
Action: Removal of the job/task listing. Possible suspension or termination of the involved account.
e. Dangerous or Self-Harming Content
Prohibited: Jobs or tasks that encourage or involve dangerous activities, self-harm, or any behavior that could cause harm to individuals.
Action: Removal of the job/task listing. Support and resources may be provided to affected users, and accounts may be suspended or banned.
f. Inappropriate Content
Prohibited: Content unsuitable for a general audience, including excessive profanity, graphic violence, or other material reasonably likely to offend.
Action: Removal of the job/task listing. Accounts may be suspended for repeated offenses.
Content Moderation
a. Automated Systems
We employ AI-driven tools to monitor and filter content to ensure compliance with these policies.
These tools are regularly updated to improve their effectiveness.
b. Human Moderators
Our trained moderation team reviews flagged content to ensure adherence to our policies.
Moderators follow detailed guidelines and undergo regular training.
c. User Reporting
Users can report inappropriate or policy-violating content via our in-app reporting feature.
Reports are reviewed promptly, and appropriate action is taken based on our guidelines.
Enforcement & Compliance
a. Guidelines and Consequences
Clear guidelines are provided within the app regarding prohibited content.
Violations may result in warnings, temporary suspension, or permanent ban, depending on the severity and frequency of the violation.
b. Regular Audits
We conduct regular audits of our content moderation processes and policies to ensure they meet compliance standards.
Policies are updated as needed to reflect changes in regulations and best practices.
c. User Education
Users are informed about our content policies and reporting mechanisms through onboarding materials and periodic updates.
Resources are available to help users understand and adhere to community standards.
Review & Updates
a. Policy Updates
This policy is reviewed and updated regularly to align with legal requirements, App Store guidelines, and community feedback.
Users are notified of significant changes to the policy.
b. Feedback Mechanism
Users are encouraged to provide feedback on our content policies and moderation practices.
Feedback is reviewed and used to drive continuous improvement of our policies and moderation practices.
Contact Us
For questions or concerns regarding this Content Policy, please contact our support team at support@gigsterr.app.
Compliance with Apple’s Guidelines
This policy is designed to ensure compliance with Apple’s App Store Review Guidelines, particularly regarding:
Content Restrictions (Guideline 1.1): Prohibiting illegal, explicit, and harmful content.
User Safety (Guideline 1.2): Implementing measures to protect users from harassment and misinformation.
Content Moderation (Guideline 1.2 and 5.1): Using both automated and human moderation to enforce content standards.