While experimenting with computer vision, I came across YOLOv11, a family of pre-trained object-detection models, which sparked my interest in training a detection model of my own. This experiment aimed to deepen my understanding of how datasets are used to train AI detection models and to explore the biases inherent in those datasets. It also served as a practical exercise to familiarize myself with the model training process. My exploration was influenced by ongoing discussions around data governance and the pervasive nature of surveillance, which highlight the ethical considerations tied to AI detection technologies.
Programs used: ChatGPT, Python, OpenCV, YOLOv11
The code integrates YOLOv11 with OpenCV, using the existing pre-trained models to detect objects in a live camera feed. When a particular object is detected, I can set parameters that alter the footage displayed in real time.
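The integration described above can be sketched roughly as follows. This is a hypothetical reconstruction, not my exact code: it assumes the `ultralytics` and `opencv-python` packages and the `yolo11n.pt` pre-trained weights, and the heavy imports are kept inside `run()` so the small filtering helper can be exercised on its own.

```python
def class_boxes(names, boxes, target):
    """Keep only boxes whose class label matches `target`.

    `names` maps class ids to labels (the shape of YOLO's `model.names`);
    `boxes` is a list of (class_id, x1, y1, x2, y2) tuples.
    """
    return [(x1, y1, x2, y2)
            for cls_id, x1, y1, x2, y2 in boxes
            if names.get(cls_id) == target]

def run(target="bottle"):
    # Hypothetical wiring of YOLOv11 + OpenCV; not verified end to end.
    import cv2
    from ultralytics import YOLO

    model = YOLO("yolo11n.pt")          # pre-trained weights (assumption)
    cap = cv2.VideoCapture(0)           # default webcam
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = model(frame, verbose=False)[0]
        # Flatten each detection into (class_id, x1, y1, x2, y2).
        boxes = [(int(b.cls), *map(int, b.xyxy[0])) for b in result.boxes]
        for x1, y1, x2, y2 in class_boxes(model.names, boxes, target):
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.imshow("detections", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```

Filtering detections down to one target class is what makes it possible to react only to the chosen object rather than to everything the model recognizes.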
The experiment began with a straightforward task: detecting a "bottle." Once the bottle was detected, a mosaic effect was applied over it. Inspired by previous work by designers who developed clothing to evade data surveillance, this experiment aimed to spark conversations about surveillance and data collection. By focusing on an ordinary, familiar object like a bottle, the project sought to provoke thought about the pervasive presence of AI in our lives and its implications for privacy.
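The mosaic effect itself amounts to pixelating the detected region. A minimal sketch, written in plain NumPy as a stand-in for the usual OpenCV trick of shrinking and re-enlarging the region with `cv2.resize` (function name and block size are illustrative):

```python
import numpy as np

def mosaic(frame, box, block=16):
    """Pixelate the region `box` = (x1, y1, x2, y2) of an HxWx3 frame
    by replacing each `block`-sized tile with its mean colour."""
    x1, y1, x2, y2 = box
    roi = frame[y1:y2, x1:x2].astype(float)
    h, w = roi.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = roi[y:y + block, x:x + block]
            tile[...] = tile.mean(axis=(0, 1))  # flatten tile to one colour
    frame[y1:y2, x1:x2] = roi.astype(frame.dtype)
    return frame
```

Applying this to each bounding box returned for the "bottle" class obscures the object in the displayed footage while leaving the rest of the frame untouched.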
In conclusion, the experiment gave me valuable insight into the inner workings of AI detection systems, particularly how easily datasets can be shaped by developers or creators. The process of labeling and training models showed how accessible it is to modify existing detection models, as I did using publicly available knowledge and resources. And with modern AI tools able to assist with coding even for those without prior experience, customizing and repurposing such models becomes increasingly straightforward.
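The labeling step mentioned above largely comes down to writing one small text file per image in YOLO's normalized format, after which Ultralytics can fine-tune a pre-trained model on the resulting dataset (e.g. via `model.train(data="data.yaml", ...)`). A minimal sketch of the conversion, assuming pixel-space boxes; the function name is illustrative:

```python
def yolo_label(class_id, box, img_w, img_h):
    """Convert a pixel-space (x1, y1, x2, y2) box into one line of a
    YOLO-format label file: "class cx cy w h", all normalized to 0-1."""
    x1, y1, x2, y2 = box
    cx = (x1 + x2) / 2 / img_w   # box centre, as a fraction of image width
    cy = (y1 + y2) / 2 / img_h   # box centre, as a fraction of image height
    w = (x2 - x1) / img_w        # box width, normalized
    h = (y2 - y1) / img_h        # box height, normalized
    return f"{class_id} {cx} {cy} {w} {h}"
```

That so little tooling stands between a labeled folder of images and a retrained detector is precisely what makes these models easy to repurpose.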