In the digital age, images travel faster than footsteps. A single picture, shared across screens and timelines, can move through countless conversations before anyone pauses to ask where it came from. Sometimes it informs. Sometimes it inspires. And occasionally, it unsettles—raising questions not only about what we see, but about who created it and why.
Recently, such questions surfaced after an artificial intelligence–generated image began circulating online. The image portrayed a disturbing scene depicting an acid attack on an activist associated with the Commission for the Disappeared and Victims of Violence, widely known as KontraS. Though the image was not a real photograph, its visual suggestion of violence quickly stirred concern among observers and advocates.
In response, the legal team connected to the activist called for authorities to trace the creator of the AI-generated image. Their request reflects a growing awareness of how digital tools—particularly generative AI—can be used not only to illustrate ideas but also to produce images that resemble real events in unsettling ways.
For those familiar with Indonesia’s history of human rights advocacy, KontraS has long stood as a voice for victims of past abuses and unresolved cases. Activists associated with the organization frequently engage in legal advocacy, research, and public campaigns related to justice and accountability. Within such a context, the appearance of a provocative AI-generated depiction tied to an activist inevitably raises concerns about potential intimidation or misinformation.
Members of the legal team expressed the hope that investigators would identify the individual responsible for creating and distributing the image. Their argument is grounded not simply in the existence of the image itself, but in the potential implications it carries—particularly if it contributes to fear, harassment, or the distortion of sensitive incidents.
Artificial intelligence technologies have advanced rapidly in recent years, allowing users to generate highly realistic visual scenes using only text prompts. While these tools can be valuable for creative and educational purposes, experts often note that they also introduce new ethical challenges. Images that appear authentic can circulate widely before viewers recognize that they are digitally generated.
In public spaces shaped by social media, the boundary between illustration and documentation can blur easily. A fabricated image, even if labeled as artificial, may still evoke emotional reactions or create confusion if detached from context.
Indonesia, like many countries, continues to navigate how existing legal frameworks apply to the evolving landscape of digital content. Authorities sometimes examine whether the creation or distribution of certain materials could fall under laws related to electronic information, defamation, intimidation, or public disorder.
For the legal team representing the activist, the request to trace the image’s creator is part of a broader effort to ensure accountability in online spaces. Their call reflects a belief that digital expression, like any form of communication, carries responsibility for its impact.
As discussions about artificial intelligence grow around the world, cases like this illustrate how technological innovation intersects with questions of law, ethics, and public trust. Each new tool expands the possibilities of creativity—but also invites societies to reconsider how truth, representation, and accountability should be safeguarded.
Authorities have not yet announced further developments regarding the request to trace the creator of the AI-generated image. For now, the matter remains under public discussion as legal representatives continue to urge an investigation into the image's origins.
AI Image Disclaimer: Graphics are AI-generated and intended for representation, not reality.
Source Check: Credible coverage of the issue appears in several Indonesian mainstream and reputable media outlets, including Kompas, CNN Indonesia, Tempo, ANTARA, and Detik.

