Here’s my updated iteration -
This is a real-time video protocol for camera nodes to un-expose, see, and meet each other through narratives of a virtual private network, while making live images and sound simultaneously. Via a link to the site, each of the two addresses will be prompted to “allow use” of their camera before entering. Only two camera addresses can use the site at a time in this beta test: the point is to explore the space between two nodes/addresses, in other words, to look at, to listen to, to examine, to float with one edge between two nodes/addresses. The two cameras interact with each other as the user/performer shows up in front of the camera, dodges, or uses one hand to block the camera lens while the narrative proceeds. Changes in the videos’ exposure and brightness affect the texture of the sound, which is noise sonified from the interchanging pixels of the images. As the narrative approaches its temporary ending, the protocol opens itself up for further experimentation, for the two camera nodes to float on.
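The exposure-to-sound idea above could be sketched roughly like this: average the brightness of each video frame’s pixels and map it to an oscillator pitch, so covering the lens darkens the image and lowers the tone. This is a minimal, hypothetical sketch, not the site’s actual code; the luma weights and frequency range are my own illustrative choices, and the pixel buffer is assumed to be RGBA data as returned by a canvas `getImageData()` call.

```javascript
// Hypothetical sketch of the pixel-sonification mapping; constants are illustrative.

// Average luminance (0–255) of an RGBA pixel buffer (4 bytes per pixel).
function averageBrightness(rgba) {
  let sum = 0;
  const pixels = rgba.length / 4;
  for (let i = 0; i < rgba.length; i += 4) {
    // Rec. 601 luma weights for R, G, B; the alpha byte is skipped.
    sum += 0.299 * rgba[i] + 0.587 * rgba[i + 1] + 0.114 * rgba[i + 2];
  }
  return sum / pixels;
}

// Map brightness to a frequency: a hand over the lens drops the pitch,
// an exposed bright frame raises it. Range chosen arbitrarily here.
function brightnessToFrequency(brightness, minHz = 80, maxHz = 880) {
  return minHz + (brightness / 255) * (maxHz - minHz);
}

// In a browser this could drive a Web Audio OscillatorNode once per frame, e.g.:
//   osc.frequency.value = brightnessToFrequency(averageBrightness(frame.data));
```

With two nodes, each browser could run this mapping on both its own frames and the remote peer’s, so the overlapping images literally interfere as overlapping tones.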
I’d like to create a video work not only about but also with the building of its own protocol: in this case, making videos in the protocol of a virtual private network as feelings, experiences, and art. Eventually, my work and research for Summer of Protocols ’24 is also about making art with protocols that is always already within and among the discussions of technological advancement and protocol entrepreneurship.
Here’s the real-time video protocol site itself that you can try and play with -
If you have two camera inputs, you could potentially test it using two different browsers activating two different cameras… if not, feel free to try it with one browser opening up the site with one camera. | And the code is here.
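For the two-browser setup, the “allow use” prompt each address sees is the standard browser camera-permission dialog. A minimal sketch of how a site might request it, assuming the standard `getUserMedia` API (the helper names here are hypothetical, not from the project’s code):

```javascript
// Hypothetical sketch of the camera-permission flow; helper names are my own.

// Build media constraints: optionally pin a specific camera by deviceId,
// so each of the two browsers can activate a different camera.
function cameraConstraints(deviceId) {
  return {
    video: deviceId ? { deviceId: { exact: deviceId } } : true,
    audio: true, // audio alongside video, for the sound layer
  };
}

// In a browser, calling this triggers the "allow use" dialog before entering.
async function enterProtocol(deviceId) {
  return navigator.mediaDevices.getUserMedia(cameraConstraints(deviceId));
}
```

Available cameras can be listed with `navigator.mediaDevices.enumerateDevices()` to pick a `deviceId` per browser.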
Here’s a screen-recording video test -
I “performed” in this “real-time video protocol” myself using two cameras as an example - you can see how the videos play out with the overlaying images, as well as listen to how it sounds with both the oscillating noise and the text-to-speech narrative.
Please follow this thread here to read more about my updated methods and reflections in the making of this project.
Would be very grateful to hear any feedback, feelings, thoughts, questions… from you all! <3