Project Final Idea


I have refined and come up with two project ideas that have been approved by my lecturer.

AGE OF SMART SURVEILLANCE: I plan to build an object detector that highlights everyday devices that store personal data, to bring attention to how common they are. Initially, I considered using Google Teachable Machine to train the model myself. However, after consulting my lecturer, I realized that this approach would limit detection to specific locations, as the background also plays a role in detecting objects. She suggested using a pre-trained model instead, specifically the COCO-SSD object detection model, which can be imported into p5.js. This model has already been trained on many images of each object, so the objects will be recognized against most backgrounds.

My first step will be to code a program that captures live video and imports the COCO-SSD model. The program will then highlight storage devices in some manner. Following this, I will test and iterate on the program, possibly adding features that display more information about the detected devices. All sources I go on to use will be referenced in the documentation.
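As a rough starting point, the steps above could look something like the sketch below. This is only a minimal outline: it assumes the ml5.js wrapper around COCO-SSD is loaded alongside p5.js, and helper names such as filterStorageDevices and the exact list of device labels are my own choices, not part of either library.

```javascript
// Sketch outline: webcam capture + COCO-SSD via ml5.js (assumed setup),
// highlighting only the detections that correspond to data-storing devices.

// COCO-SSD class labels I am treating as "storage devices" (my own selection)
const STORAGE_DEVICES = ['cell phone', 'laptop', 'tv', 'keyboard', 'remote'];

// Pure helper: keep only detections whose label is in the list above
function filterStorageDevices(detections) {
  return detections.filter(d => STORAGE_DEVICES.includes(d.label));
}

let video;
let detector;
let detections = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO); // live webcam feed
  video.hide();
  // Load the pre-trained COCO-SSD model, then start detecting
  detector = ml5.objectDetector('cocossd', () => {
    detector.detect(video, gotResults);
  });
}

function gotResults(err, results) {
  if (!err) detections = filterStorageDevices(results);
  detector.detect(video, gotResults); // keep detecting, frame after frame
}

function draw() {
  image(video, 0, 0);
  for (const d of detections) {
    // Draw a box and a label around each detected device
    noFill();
    stroke(255, 0, 0);
    rect(d.x, d.y, d.width, d.height);
    noStroke();
    fill(255, 0, 0);
    text(d.label + ': stores personal data', d.x, d.y - 5);
  }
}
```

The filtering step is what turns a general-purpose detector into the project's specific lens: COCO-SSD reports everything it recognizes, and the sketch simply ignores anything that is not on the storage-device list.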

ORIGINALITY OF LIFE VR GALLERY: This project involves placing AI-generated or AI-modified content alongside human-created content in a virtual gallery. Visitors will be tasked with differentiating between the two as they explore the gallery through a web interface or VR headset. The project aims to demonstrate how advancements in AI have made it difficult to distinguish artificial content from human-made content, especially now that AI can generate content quickly and at scale.

I will use various AI content creation programs and platforms to produce different media types. Spatial Rooms will serve as the virtual gallery space. The human-generated content will either come from my own work or be sourced online (with appropriate references).

I plan to have 12 pieces of media in the online gallery: six generated by AI and six created by humans.

2 videos each, with the AI pair of different natures: one AI video, focused on scenery and landscape, was generated with Artbreeder (old version), which uses AI to blend and generate images, including landscapes and scenery, based on user input; I selected royalty-free photos from its built-in library, then added an audio track in Microsoft Clipchamp to match the human video, which has sound. The other AI video focuses on realistic human avatar generation, for which I used Synthesia, a program known for producing high-quality videos featuring realistic avatars set against various scenic backgrounds. That video is in Spanish, as the English accent sounded a little flat, so to avoid making it too easy to distinguish I used another language.

The human videos were filmed by me at two exhibitions, in London and Tokyo.

2 images each: both human photos were taken by me on my iPhone and have not been edited in any way. The AI images were generated by DALL-E 3, through Microsoft Copilot Designer on Bing Chat.

1 piece of music each: for the AI music I used AIVA (Artificial Intelligence Virtual Artist). The keyword I searched for both the human and the AI music was 'orchestra'. The human music track came from Freesound.org; its link will go in the references section of the documentation.

1 artwork each: I chose the human painting first, 'Paso Clouds', an impressionist-style painting by Erin Hanson. I then wrote a prompt in DALL-E 3, through Microsoft Copilot Designer on Bing Chat, describing the scene in the human painting (without mentioning her name, only that the painting is in an impressionist style) to generate its AI counterpart.
