Tech giant Apple recently removed applications from its App Store that were using artificial intelligence to generate intimate images of individuals without their consent. According to reports, at least three such apps were pulled from the storefront after they were found to be offering these questionable services.
The apps had apparently slipped past the company's review process until an online publication brought them to Apple's attention. Within a week of being alerted, the Cupertino-based company pulled the plug on the applications.
While the exact functioning of the apps remains unclear, they reportedly marketed their services as AI tools that could remove clothing from ordinary photos or swap faces onto intimate pictures. Such capabilities can obviously be misused to generate non-consensual imagery that harms people's privacy and dignity.
In recent years, the proliferation of deepfake technology has emerged as a major concern, with several incidents highlighting how such algorithms can be exploited to manipulate media. Although Apple and other app marketplace operators have tried to regulate these tools, some dodgy applications continue to find ways around the monitoring, underscoring the need for continued vigilance in this space.
Going forward, tighter scrutiny may be required to ensure that only legitimate AI tools respecting user safety and ethics find a place on app stores. As the technology keeps advancing, the responsibilities of platform gatekeepers will also need recalibrating to balance innovation with the prevention of harm.