From 7338792c111723128958e180e5846d92302a4fb2 Mon Sep 17 00:00:00 2001
From: "kam (from the studio)"
Date: Tue, 4 Oct 2022 14:24:43 +0200
Subject: [PATCH] ooc and grs

---
 projects/grs/documentation.md                |   37 +
 projects/ooc-summer-session/documentation.md | 1362 +++++++++---------
 2 files changed, 718 insertions(+), 681 deletions(-)

diff --git a/projects/grs/documentation.md b/projects/grs/documentation.md
index 2fe0388..85359a8 100644
--- a/projects/grs/documentation.md
+++ b/projects/grs/documentation.md
@@ -218,3 +218,40 @@ The practical aspect depends on the second and third case studies. There are som

- ways of being - james bridle
- new dark age - james bridle

## Hackpact 2

The Padliography is a tool to keep track of pad documents. It archives them in a dedicated page of the PZI wiki, through the joint efforts of the MediaWiki API and a Flask app on the Soupboat.

It's a piece of software made to be offered.

At the moment, even though the project works, its form is not yet ready for the public. It lacks entry points, and while it has a README.md file, it's not really generous and doesn't cover the basic info it should document.

The plan is to make it useful for and usable by others:

- Dynamic wiki page
- Init wiki page
- Documentation on how to install an instance of the Padliography
- Generous readme

### 1st Oct

- Went to the market and bought so many vegetables that I couldn't lift the ikea bag anymore. Happy with it
- Worked with Supi on the current state of the Workbook to prepare for the M&Ms session on monday.
- Doubts on the backend of the Workbook.

### 2nd Oct

- Looked into the text editor _vim_ and its tutorial `vimtutor`. It's a nice hands-on approach to documentation.
- Read a tutorial for setting up _Flask_ and _sqlite3_. Really a different experience from the _vim_ one. Much more top-down.
- Branched the Workbook to implement a database system. The original prototype was based on yaml files and nested folders, and after a while it was a really convoluted mess. Fractal and funny and ridiculous at some point, for sure painful every time we had to add a new feature. Even though databases have this ancient-technology aura, we tried one out and it's way better.
- Maintainability is critical when a project is meant to be shared.

### 3rd Oct

- Worked with Supi on the Workbook and presented a demo for the M&Ms session of 3rd Oct. Showcasing a demo is the best way to understand what is clear and what's not in a project. It also offers different perspectives and alternative entry points.
- Follow-up with Gr and Supi about borders ecology: how a tool defines a community, who's inside and who's outside this community, which forms the border of this community takes, where the entry points are, etc.

### 4th Oct

Refactor of the Padliography to simplify and comment the code.

diff --git a/projects/ooc-summer-session/documentation.md b/projects/ooc-summer-session/documentation.md
index 9786637..1d01d4e 100644
--- a/projects/ooc-summer-session/documentation.md
+++ b/projects/ooc-summer-session/documentation.md
@@ -1,8 +1,8 @@

---
categories:
  - Performance
  - Web
  - VR
date: 07/07/2022
description: Dev log for Object Oriented Choreography
slug: ooc-summer-session

@@ -12,226 +12,228 @@ cover_alt: ooc but is a workshop

---

This is some sort of devlog from the Summer Session residency at V2.
Object Oriented Choreography got selected by Sardegna Teatro and we were sent to the Netherlands\* to work for the whole summer\*\*.

\* actually i was already here
\*\* actually my plan for the summer was really different but what can i do

Take everything as a WIP because it's literally that

![v2 from above](v2top.jpg)

## Workshop?
The third iteration of OOC could be a two-part workshop: in the first moment participants co-design a VR environment with a custom multiplayer 3D editor. The second half is a dance workshop with a choreographer that guides and explores with the participants the virtual reality they created all together. The VR headset is passed from hand to hand as a catalyst, transformed from a single-user device to a wandering and plural experience.

1. design the VR environment together

   - to encode information with spatial and body knowledges
   - to achieve meaningful and expressive interactivity through participation
   - to take into account multiple and situated points of view
   - starting point is the essay on the Zone written for the previous iteration
     - [the essay](https://www.neroeditions.com/object-oriented-choreography/)
     - excerpts used as prompts, focus on using space as an interface with the body

2. explore the collective VR environment

   - to decode information with spatial and body knowledges
   - to transform vr into a shared device
     - who is inside the VR trusts the ones outside
     - the ones outside the VR take care of who's inside
   - performative workshop
     - stretching and warming up
     - exercises: moving with hybrid space
     - improvisation

### Outcomes:

1. documentation of the workshop
2. a different 3d environment for each iteration of the workshop, i.e. a digital gallery ?
3. the 3D editor

## first part - design the VR environment

- how? with a custom 3D editor

  - a kind of tiltbrush?
  - super simple editor, limited functionality
  - work with volumes, maybe images (textures)? maybe text?

- how do we deal with the multiplayer aspect ?

  - how do we deal with the temporality of creation?
  - how can participants collaborate if there is only 1 VR system?

- take into account that our VR system is:

  - headset
  - 2 controllers
  - 3 motion trackers

- think of collaborative uses of these six pieces of hardware

  - mixed editor ?
  - accessible from outside the vr system ?
  - like a web interface and a vr interface ? multiplayer with different kinds of access and functionality ?
  - like vr for the volumes and web for images and text ?

- how technically ?

  - VR interface

    - vvvv [https://visualprogramming.net/](https://visualprogramming.net/)
    - [https://www.stride3d.net/](https://www.stride3d.net/)
    - openVR
    - for modelling volumes: dynamic meshes with marching cubes
    - for UI: [https://github.com/ocornut/imgui](https://github.com/ocornut/imgui)

  - Web interface
    - three.js
    - vue.js

- references

  - (VR) 3d editor
research + - [https://www.tiltbrush.com/#get-it](https://www.tiltbrush.com/#get-it) + - [https://openbrush.app/](https://openbrush.app/) + - [https://www.kodon.xyz/#faq](https://www.kodon.xyz/#faq) + - [https://masterpiecestudio.com/](https://masterpiecestudio.com/) + - [https://www.adobe.com/products/medium.html](https://www.adobe.com/products/medium.html) +- see + - terraforming - sebastian lague [https://www.youtube.com/watch?v=vTMEdHcKgM4](https://www.youtube.com/watch?v=vTMEdHcKgM4), for volumes modelling + - my inner wolf - studio moniker [https://studiomoniker.com/projects/myinnerwolf](https://studiomoniker.com/projects/myinnerwolf), for the multiplayer work with images and textures ### what's the plan: -- transition from performance to workshop -- participative forms of interaction ? -- simplify what's there already +- transition from performance to workshop +- participative forms of interaction ? +- simplify what's there already ### what's the point? -- make sense together of a complex, contradictory system such as : the massive digital infrastructure +- make sense together of a complex, contradictory system such as : the massive digital infrastructure -- what does it mean: to make sense together? to accept the limits of our own individual description and join others to have a better view (renegotiation of complexity?) +- what does it mean: to make sense together? to accept the limits of our own individual description and join others to have a better view (renegotiation of complexity?) -### what are our roles here? +### what are our roles here? -- facilitators? -- to provide some tools and a context around them -- which kind of tools +- facilitators? +- to provide some tools and a context around them +- which kind of tools -## Mapping the algorithm +## Mapping the algorithm -Our technological environment is made of abstract architectures built of hardware, software and networks. 
These abstract architectures organize information, resources, bodies, time; in fact they organize our life. Yet, they can be really obscure and difficult to grasp, even to imagine.

Within VR we can transform these abstract architectures into virtual ones: spaces that are modelled on the nature, behaviour, and power relations around specific technologies. Places that constrain the movements of our body and at the same time can be explored with the same physical knowledge and awareness.

Starting from one specific architecture, we model and map it together with the public.

This iteration of OOC is a performance with the temporality of a two-part workshop: in the first moment participants model the virtual environment together with a custom VR editor that lets them create the space at 1:1 scale.
The second half is a performative workshop with a choreographer that guides and explores with the participants the virtual reality they created all together. The VR headset is passed from hand to hand as a way to tune in and out of the virtual space, transformed from a single-user device to a wandering and plural experience.

Since an abstract architecture is composed of several entities interacting together, the dramaturgical structure can be written following them. The narration of the modeling workshop, as well as the performative exercises from the warming up to the final improvisation, can be modeled on the elements of the architecture.

~

The idea of having the public model the space and explore it with the performer responds to several needs:

- a virtual space is better experienced first hand
- meaningful and expressive forms of interaction
- making sense together of black box algorithms
- participants are aware of what's happening inside the VR, so there is no need for other visual support

To give an example: the first OOC was modeled on a group chat. The connected participants were represented as _clients_ placed in a big circular space, _the server_. Within the server, the performer acted as the _algorithm_, taking messages from one user to the other.

## Could it be done in a different way?
Here are three scenarios:

### Workshop

- a two-part workshop: in the first moment participants co-design a VR environment with a custom multiplayer 3D editor. The second half is a dance workshop with a choreographer that guides and explores with the participants the virtual reality they created all together. The VR headset is passed from hand to hand as a catalyst, transformed from a single-user device to a wandering and plural experience.
- it has a particular temporality: it is not as intense as a performance, and the pace can be adjusted to keep everyone engaged.
- it follows the idea of the lecture performance, steering toward a more horizontal and collaborative way of making meaning
- it provides facilitation
- cannot be done the same day as the presentation, at least a couple of days before (less time for rehearsal)

### Installation:

- There is the VR editor tool, and the facilitation of the workshop is recorded as a text (maybe audio?); participants can follow it through and create the environment while participating. the text is written with the choreographer / performer. it's a mix between the two moments of the workshop. the performer follows the same script.
- vr is used as a single-player device, intimate experience, asmr or tutorial vibe
- probably doable with up to two or three people at the same time? ( should try )
- a platform to see the different results?
- how long should it be to be meaningful for the public? at least 10 min? 15 min ?

### Platform:

- interactions and performance happen in different moments
- user-generated contents
- we gather contents online and use them to build the stage for the performance
- there is a platform on which people can build space? does it make sense if it's not done with vr things ? aka involving the body directly?

## A draft timetable

- week 1 \_ 18-24 jul

  - define concept
  - draft scenario
  - define process
  - schedule and timetable
  - plan outcomes
  - presentation (with visual ref and examples)

- week 2 \_ 25-31 jul

  - research and writing for workshop
  - technical setup research for editor
  - vr editor research and experiments
  - understand logistics for workshop moments?

  - 25 - 26:

    - workshop research
    - book of shaders

  - 27

    - book of shaders
    - setup vr editor basic

  - 28

    - workshop research / writing
    - book of shaders
    - setup vr editor prototype

  - 29

    - workshop research / writing
    - book of shaders
    - setup vr editor prototype

  - 30~31 buffer
    - meeting with sofia
    - meeting with ste \& iulia
    - update log and sardegna

- week 3 \_ 1-7 aug

  - first workshop text draft
  - first working prototype for vr editor
  - setup rehearsal

- week 4 \_ 8-14 aug
- week 5 \_ 15-21 aug
- week 6 \_ 22-28 aug
- week 7 \_ 29 aug-4 sep
- week 8 \_ 5-8 sep

### Sparse ideas

Trackers as point lights during performance (see FF light in cave)

### References

- The emergence of algorithmic solidarity: unveiling mutual aid practices and resistance among Chinese delivery workers, [read](https://journals.sagepub.com/doi/full/10.1177/1329878X221074793)
- Your order, their labor: An exploration of algorithms and
laboring on food delivery platforms in China, DOI:10.1080/17544750.2019.1583676
- The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms, DOI:10.1080/1369118X.2016.1154086
- Algorithms as culture: Some tactics for the ethnography of algorithmic systems, [read](https://journals.sagepub.com/doi/10.1177/2053951717738104)
- Redlining the Adjacent Possible: Youth and Communities of Color Face the (Not) New Future of (Not) Work, [read](https://static1.squarespace.com/static/53c7166ee4b0e7db2be69480/t/5682b8071c12101f97a8b4df/1451407367281/Redlining+the+Adjacent+Possible_DS4SI.pdf)

## An overview for Sofia:

### notes from 02/22

- focus on how the space influences the body
- the public doesn't need to look at the phone all the time
- interaction from the public changes the space
- the performer is not inside the vr all the time, there could be moments outside (ex. intro, outro)
- focus on dramaturgical development
- participants should recognize the results of their own interactions
- the public needs to see what the performer sees (or to have another visual support) -> projection?

### concept

Our technological environment is made of abstract architectures built of hardware, software and networks. These abstract architectures organize information, resources, bodies, time; in fact they organize our life. Yet, they can be really obscure and difficult to grasp, even to imagine.

Being in space is something everyone has in common, an accessible language. Space is a shared interface. We can use it as a tool to gain awareness and knowledge about complex systems.

Within VR we can transform these abstract architectures into virtual ones: spaces that are modelled on the nature, behaviour, and power relations around specific technologies. Places that constrain the movements of our body and at the same time can be explored with the same physical knowledge and awareness. (like what we did for the chat)

Starting from one specific architecture (probably the food delivery digital platforms typical of the gig economy that move riders around) we model and map it together with the public.
Since an abstract architecture is composed of several entities interacting together, a strong dramaturgical structure con be written following the elements of the architecture. +Starting from one specific architecture (probably the food delivery digital platforms typical of gig economy that moves riders around) we model and map it together with the public. Since an abstract architecture is composed of several entities interacting together, a strong dramaturgical structure con be written following the elements of the architecture. ### how to - two options 1. performance as a workshop -a performance with the temporality of a two-parts workshop: in the first moment participants model together the virtual environment with a custom VR editor, that let them create the space in 1:1 scale. +a performance with the temporality of a two-parts workshop: in the first moment participants model together the virtual environment with a custom VR editor, that let them create the space in 1:1 scale. Then a performative workshop with a choreographer / performer that guides and explores with the participants the virtual reality they created altogether. The VR headset is passed hand by hand as a way to tune in and out the virtual space, transformed from a single-user device to a wandering and plural experience. @@ -285,8 +286,6 @@ The VR editor is used as an installation. Other than the normal functionalities Both options can be activated multiple times, with different results. The resulting 3D environments can be archived on a dedicated space (like a showcase website) in order to document, (communicate, and $ell the project again for further iterations) - - ``` ___..._ _,--' "`-. @@ -307,29 +306,25 @@ Both options can be activated multiple times, with different results. 
The result ``` - - ### Meeting with Sofia and Iulia **ok ok ok no workshop let's stick to what we have and polish it** - -- interaction from the public change the space -- facilitate access to the website -- website: intro, brief overview -- at the beginning the performer is already there, idle mode -- (vertical) screen instead of projection ? -- from the essay to something more direct +- interaction from the public change the space +- facilitate access to the website +- website: intro, brief overview +- at the beginning the performer is already there, idle mode +- (vertical) screen instead of projection ? +- from the essay to something more direct ~ -- building block: - * text - * interaction - * space modification - -```- - [ ] - - [ ] - - [ ] - ->``` +- building block: + - text + - interaction + - space modification +`- - [ ] - - [ ] - - [ ] - ->` ### what do we need: @@ -340,66 +335,63 @@ Both options can be activated multiple times, with different results. The result * interaction * scene - ## 28/7 - Prototype setup ### app design -* vvvv client - * VR system - * timeline - * scenes - * text - * interaction - * space - * soundtrack - * cms -* web client - * pages - * main - * interaction - * sound notification - * about - * i11n +- vvvv client -* web server - * websocket server + - VR system + - timeline + - scenes + - text + - interaction + - space + - soundtrack + - cms +- web client -### small prototype: + - pages + - main + - interaction + - sound notification + - about + - i11n - * vvvv can send scenes to the server - * datatype: - * text - * interaction - * type - * counter - * xy - * context - * description - * etc +- web server + - websocket server -## 29/7 - Prototype Setup \& other +### small prototype: +- vvvv can send scenes to the server + - datatype: + - text + - interaction + - type + - counter + - xy + - context + - description + - etc +## 29/7 - Prototype Setup \& other The building block is the Stage. 
Each stage is a description of what's happening at the edges of the performance: what the screen is displaying, what's inside the VR, what's happening on users' smartphones. We can place a series of stages on a timeline and write a dramaturgy based on the relation between these three elements.

The model of the _stage_ is something like this:

- text
- scene
- interaction
  - type
  - context

* `text` is the text that is going to be displayed on the screen
* `scene` contains some info or setup for the scene in the vr environment
* `interaction` holds the type of the interaction and other additional info stored as a context

`text` and `scene` are meant to be used in vvvv to build the vr environment and the screen display

It could be something like:

### about text - interaction

even if we can take excerpts out of the essay we wrote, this reading setup is totally different. here our texts need to be formulated like a call to action, or a provocation to trigger the interaction.

a way to acknowledge the public

## 31/07 - Prototype setup: vvvv

The websocket implementation I'm using is simple. It just provides these kinds of events:

- on open (when a client connects)
- on close (when a client disconnects)
- on message (when there is an incoming message)

In order to distinguish between different types of messages I decided to serialize every text as a JSON string with a field named _type_. When a message event is fired the server looks at the type of the message and then acts accordingly. Every message triggers a different reaction, aka it calls a different function.

In the previous versions the check on the message type was a loong chain of if statements, but that didn't feel right, so I searched a bit for how to manage it in a better way.

In the server (node.js) I created an object that uses the types of the messages as keys, and as values functions that respond to the type of the incoming message. For vvvv I asked for some suggestions in the vvvv forum and ended up using a simple factory pattern with a common interface IMessage, used to process the incoming message based on its type. [replacing long if chain](https://discourse.vvvv.org/t/replacing-loong-if-chain/20707/3)

In order to deal with the state of the application (each message operates in a different way and on different things) I created a Context class that holds the global state of the performance, such as the websocket clients and the connected users. The IMessage interface takes this context as well as the incoming message, so it can operate on the patch.
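As a rough illustration of the object-as-dispatch idea described above — a hedged sketch, not the actual server code (the message types, `context` fields and handler names are made up):

```javascript
// Sketch: map message types to handler functions instead of a long if chain.
// Incoming messages are JSON strings with a "type" field, as described above.
const handlers = {
  // hypothetical handler: a client announces itself
  join: (context, message) => {
    context.users.push(message.username);
  },
  // hypothetical handler: the xy pointer interaction
  xy: (context, message) => {
    context.pointer = { x: message.x, y: message.y };
  },
};

function onMessage(context, raw) {
  const message = JSON.parse(raw);
  const handler = handlers[message.type];
  if (handler) handler(context, message); // unknown types are simply ignored
}
```

Each new interaction then only needs a new entry in `handlers`, which is what makes this more flexible than the if chain.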
+In order to deal with the state of the application (each message operates in a different way and on different things) I created a Context class that holds the global state of the performance, such as the websocket clients and the connected users. The IMessage interface takes this context as well as the incoming message, and so it can operate on the patch.

happy with it! it's much more flexible than the long if snake

-## 1-2/08 - two Displays & Prototype setup 
+## 1-2/08 - two Displays & Prototype setup

![One screen mounted vertically](screen.jpg)

-Yesterday together with Richard we setup the two screens to show what's happening inside the VR for the public. Initially they were mounted next to each other, in vertical. 
+Yesterday together with Richard we set up the two screens to show the public what's happening inside the VR. Initially they were mounted next to each other, vertically.

-With Iulia we thought how to place them. Instead of keeping them together probably it would be better to use them at the edge of the interactive zone. Even if the screen surface seems smaller, it's a creative constraint \& it creates more the space of the performance. 
+With Iulia we thought about how to place them. Instead of keeping them together, it would probably be better to use them at the edges of the interactive zone. Even if the screen surface seems smaller, it's a creative constraint \& it does more to create the space of the performance.

-
-Ideallly the viewer can see at the same time both screens and the performer. 
-The screens can display either the same or different things. 
+Ideally the viewer can see both screens and the performer at the same time.
+The screens can display either the same or different things.
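The type-keyed dispatch object described above can be sketched roughly like this. This is a minimal sketch, not the actual OOC server code: the handler names, message fields, and the shape of the context are illustrative.

```javascript
// Sketch of dispatching websocket messages by their `type` field.
// Hypothetical handler names and context shape, for illustration only.

const context = { clients: [], users: [] };

const handlers = {
  // each key is a message `type`; each value is the reaction it triggers
  login(ctx, msg) {
    ctx.users.push(msg.username);
    return `hello ${msg.username}`;
  },
  touch(ctx, msg) {
    return `touch from ${msg.username}`;
  },
};

function dispatch(ctx, raw) {
  // every message arrives as a JSON string with a `type` field
  const msg = JSON.parse(raw);
  const handler = handlers[msg.type];
  if (!handler) return `unknown type: ${msg.type}`; // no long if chain needed
  return handler(ctx, msg);
}
```

Adding a new message type is then just adding a new key to `handlers`, which is what makes this nicer to maintain than the if chain.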
![Two screens with frogs from Katamari](frogs.jpg)

![Two screens mapping the same space](teacup.jpg)

-
And now some general thoughts:

the username should be central in the visualization of the interaction, since it's the main connection point between what's happening outside and inside?

could it be something different from a name? could it be a color? using a drawing as an avatar?

![OOC title + hand drawn avatar](avatar.jpg)

-
### types of interaction

the idea of presence, of being there, together and connected

@@ -479,121 +468,127 @@ maybe it could be enough ? just use the touchscreen as a pointer xy and make the

nature of the pointer changes

-
## 3/08 - Prototype Setup and doubts

-Finished to setup the xy interaction with the clients and vvvv. 
+Finished setting up the xy interaction with the clients and vvvv.

-The setup with nuxt is messy since it's stuck between nuxt 2 and vue 3. There are a lot of errors that don't depend on the application but rather to the dependencies and it's really really annoying, especially since it prevents solid design principles. 
+The setup with nuxt is messy since it's stuck between nuxt 2 and vue 3. There are a lot of errors that don't depend on the application but rather on the dependencies, and it's really really annoying, especially since it prevents solid design principles.

-I'm tihnking to rewrite the web app using only Vue, instead of nuxt, but im a bit afraaaaaidd. 
+I'm thinking of rewriting the web app using only Vue, instead of nuxt, but I'm a bit afraaaaaidd.

+## 4/08 - Script

+I'm trying to understand which setup to use to rewrite the application without nuxt. Currently I'm looking into fastify + vite + vue, but it's too many things altogether and I'm a bit overwhelmed.

-## 4/08 - Script 

-Im trying to understand which setup to use to rewrite the application without nuxt. Currently im looking into fastify + vite + vue, but it's too many things altogether and im a bit overwhelmed. 
- -So now a break and let's try to list what we need and the ideas that are around to organize the work of the next week. +So now a break and let's try to list what we need and the ideas that are around to organize the work of the next week. ### Hardware Setup: -- 2 vertical displays, used also as vive basestation support, place at opposite corners of the stage -- PC with Vive connection in the third corner -- Public stands around +- 2 vertical displays, used also as vive basestation support, place at opposite corners of the stage +- PC with Vive connection in the third corner +- Public stands around +### Performance Structure -### Performance Structure +**0. before the performance** +- _the two screens loops:_ -**0. before the performance** + - Object Oriented Choreography v3.0 (with versioning? it's funny) + - Connect to the website to partecipate to the performance + - o-o-c.org + +- _website:_ -* *the two screens loops:* - * Object Oriented Choreography v3.0 (with versioning? it's funny) - * Connect to the website to partecipate to the performance - * o-o-c.org + - access page (choose a name or draw a simple avatar) + - waiting room with short introduction: what is this and how does it work. In 2 sentences. -* *website:* - * access page (choose a name or draw a simple avatar) - * waiting room with short introduction: what is this and how does it work. In 2 sentences. +- _stage:_ -* *stage:* - * performer in idle mode, already inside the vr - * user connected trigger minimal movements? + - performer in idle mode, already inside the vr + - user connected trigger minimal movements? -* *sound:* - * first pad sloowly fade in ? +- _sound:_ + - first pad sloowly fade in ? **1. performance starts, first interaction: touch** -* *two screens:* - * direct feedback of interaction - * representation of the space inside the vr - * position of the performer inside the vr (point light?) 
+- _two screens:_ + + - direct feedback of interaction + - representation of the space inside the vr + - position of the performer inside the vr (point light?) + +- _website:_ -* *website:* - * touch interaction. users are invited to keep pressed the touchscreen. - * a sentence to create context aroud the interaction? maybe not, because: - * to interact the user doesn't need to look at the phone, it's more an intuitive and physical thing + - touch interaction. users are invited to keep pressed the touchscreen. + - a sentence to create context aroud the interaction? maybe not, because: + - to interact the user doesn't need to look at the phone, it's more an intuitive and physical thing -* *stage:* - * performer st. thomas movement to invite for the touch interaction - * the public is invited to follow the performer ? (for example releasing the touch improvisely, some kind of slow rythm, touch pattern, explore this idea as introduction?) +- _stage:_ -* *sound:* - * ost from PA - * interaction from the phones + - performer st. thomas movement to invite for the touch interaction + - the public is invited to follow the performer ? (for example releasing the touch improvisely, some kind of slow rythm, touch pattern, explore this idea as introduction?) -* *interaction:* - * every user is an object in the VR space, placed in a random-supply-chain to build a meaningful space for the choreography. the object is visible only when the user is touching the screen. - * the performer can activate these objects by getting closer - * when an object is activated it sends a notification to the smartphone of the user, that play some sound effect - * build on this composition +- _sound:_ - * **bonus**: the more the user keep pressed, the bigger the object grows? so it's activated more frequently and this could lead to some choir and multiple activations at the same time? 
+ - ost from PA + - interaction from the phones -```[]need a transition[]``` +- _interaction:_ + + - every user is an object in the VR space, placed in a random-supply-chain to build a meaningful space for the choreography. the object is visible only when the user is touching the screen. + - the performer can activate these objects by getting closer + - when an object is activated it sends a notification to the smartphone of the user, that play some sound effect + - build on this composition + + - **bonus**: the more the user keep pressed, the bigger the object grows? so it's activated more frequently and this could lead to some choir and multiple activations at the same time? + +`[]need a transition[]` **2. second interaction: XY** -* *two screens:* - * one screen show the representation of the space seen from the outside, kinda aerial view - * one screen show focus on one user at the time - * for object to go aroud think to the kind of animations of everything for example +- _two screens:_ + + - one screen show the representation of the space seen from the outside, kinda aerial view + - one screen show focus on one user at the time + - for object to go aroud think to the kind of animations of everything for example -* *website:* - * touch xy interaction - * double tap to recognize which one are you? with visual feedback like hop. maybe not necesary +- _website:_ -* *stage:* - * for sure the beginning of the interaction will be super chaotic, with everyone going around like crazy. - * the goal could be to go from this initial chaos to some kind of circular pattern, that seems the most iconic and easy thing - * the performer invites to circular movements, growing in intensity. - * actually this could be a great finale, using the same finale of the last iteration + - touch xy interaction + - double tap to recognize which one are you? with visual feedback like hop. 
maybe not necessary

-* *stage:*
-  * for sure the beginning of the interaction will be super chaotic, with everyone going around like crazy.
-  * the goal could be to go from this initial chaos to some kind of circular pattern, that seems the most iconic and easy thing
-  * the performer invites to circular movements, growing in intensity.
-  * actually this could be a great finale, using the same finale of the last iteration
+- _stage:_

-* *sound:*
-  * ost from PA
-  * focus notification (the smartphone rings when the user is in focus on the screen)
+  - for sure the beginning of the interaction will be super chaotic, with everyone going around like crazy.
+  - the goal could be to go from this initial chaos to some kind of circular pattern, which seems the most iconic and easy thing
+  - the performer invites circular movements, growing in intensity.
+  - actually this could be a great finale, using the same finale as the last iteration

-* *interaction:*
-  * users are invited to use the touchscreen as a trackpad, to move into the space.
-  * how not to be ultra chaotic from the start? or:
-  * how to facilitate this chaos toward something more organic
+- _sound:_

+  - ost from PA
+  - focus notification (the smartphone rings when the user is in focus on the screen)

+- _interaction:_
+  - users are invited to use the touchscreen as a trackpad, to move around the space.
+  - how not to be ultra chaotic from the start? or:
+  - how to facilitate this chaos toward something more organic

+_would be nice to have a camera system that lets you position the camera in preview mode and then push it to one of the screens, overriding the preset_

## 5-08

-*Notes from the video of OOC@Zone Digitali. The name of the movements refer to the essay triggers.*
+_Notes from the video of OOC@Zone Digitali. The names of the movements refer to the essay triggers._

### list of triggers:

-- *Is performer online?*
-  is great for the beginning. It could start super minimal and imperceptible, transition from the idle mode to the beginning of the performance, with slowly increasing intensity
+- _Is performer online?_
+  is great for the beginning. 
It could start super minimal and imperceptible, transitioning from idle mode to the beginning of the performance with slowly increasing intensity

-- *San Tommaso*, *Janus*
-  StThomash could be an opening, for an explicit invitation to the touch interaction. hold the position and insist.
+- _San Tommaso_, _Janus_
+  St. Thomas could be an opening, an explicit invitation to the touch interaction. hold the position and insist.

also

@@ -601,66 +596,64 @@ So now a break and let's try to list what we need and the ideas that are around

`↓↑`

-- *Fingertips*, *Scribble*
-  are a good way to elaborate on the idea of touch interaction. focus on fingers as well as focus on the surface those fingers are sensing. Bring new consistency to the touchscreen, transform its flat and smooth surface to something else.
+- _Fingertips_, _Scribble_
+  are a good way to elaborate on the idea of touch interaction. focus on the fingers, as well as on the surface those fingers are sensing. Bring new texture to the touchscreen, transform its flat and smooth surface into something else.

-- *Perimetro*, *Area*
-  Nice explorative qualities. 
-  Could be used for notification composition during the first interaction? 
+- _Perimetro_, _Area_
+  Nice explorative qualities.
+  Could be used for notification composition during the first interaction?

After the invitation, a moment of composition.

~

-- *Tapping*, *Scrolling *
-  floor movements for a second part ? between interaction touch and xy
+- _Tapping_, _Scrolling_
+  floor movements for a second part ? between the touch and xy interactions

-- *Logic & Logistic*, *Efficiency*
-  Stationary movement that could introduce the performer point of view. The body is super expressive and the head is still, so the point of view in the VR is not crazy from the start.
+- _Logic & Logistic_, _Efficiency_
+  Stationary movement that could introduce the performer's point of view. 
The body is super expressive and the head is still, so the point of view in the VR is not crazy from the start.

-- *Knot*, *Velocity*
-  The stationary movement could then start traversing more the space, integrating also the quality and intensity of efficiency and velocity. 
+- _Knot_, _Velocity_
+  The stationary movement could then start traversing the space more, also integrating the quality and intensity of efficiency and velocity.

~

-- *Scrolling*
-  could be used during the xy interaction, again as a form of invitation
-- *Collective Rituals*
-  the final sequence that builds on a circular pattern of the xy interaction, slower and slower
-- *Optical*
-- *Glitch*
-- *Fine*
-
+- _Scrolling_
+  could be used during the xy interaction, again as a form of invitation
+- _Collective Rituals_
+  the final sequence that builds on a circular pattern of the xy interaction, slower and slower
+- _Optical_
+- _Glitch_
+- _Fine_

-Need to finish this analysis but for now here is a draft structure for the performance. Eventually will integrate it with the previous two sections: the Performance Structure and the trigger notes. 
+Need to finish this analysis, but for now here is a draft structure for the performance. Eventually I will integrate it with the previous two sections: the Performance Structure and the trigger notes.

### Structure?

**I**
Invitation and definition of the domain: touch interaction and public participation

-* a. invitation
-  * extend the extents of the touchscreen
-  * create a shared consistence for the screen surface
-* b. composition
-  * explore it as a poetic device
-* 
+- a. invitation
+  - extend the extents of the touchscreen
+  - create a shared texture for the screen surface
+- b. composition
+  - explore it as a poetic device

**II**

????

**III**

from participation to collective ritual

## 6-08

Two ideas for the performance:

### a. Abstract Supply Chain

-*--> about the space where the performer dances*
-The space in the virtual environment resemble more an Abstract Supply Chain instead of an architectural space. It's an environment not made by walls, floor, and ceiling, but rather a landscape filled with objects and actors, the most peculiar one being the performer.
+_--> about the space where the performer dances_
+
+The space in the virtual environment resembles an Abstract Supply Chain more than an architectural space. It's an environment not made of walls, floor, and ceiling, but rather a landscape filled with objects and actors, the most peculiar one being the performer.

We can build a model that scales with the connection of new users. Something that makes sense with 10 people connected as well as 50. Something like a fractal, that is legible at different scales and intensities.

@@ -670,13 +663,14 @@ Lot of interesting input here:

[Remystifying supply chains](https://studio.ribbonfarm.com/p/remystifying-supply-chains)

### b. Object Oriented Live Action RolePlay (LARP)

-*--> about the role of the public*
-We have a poll of 3d object related to our theme: delivery packages, bike, delivery backpack, kiva robot, drone, minerals, rack, servers, gpu, container, etc. a proper bestiary of the zone. 
+_--> about the role of the public_
+
+We have a pool of 3d objects related to our theme: delivery packages, bike, delivery backpack, kiva robot, drone, minerals, rack, servers, gpu, container, etc. a proper bestiary of the zone.

-Every user is assigned to an object at login. The object you are influences more or less also your behavior in the interaction. Im imagining it in a subtle way, more something related to situatedness than theatrical acting. An object oriented LARP. 
+Every user is assigned an object at login. The object you are also influences, more or less, your behavior in the interaction. 
I'm imagining it in a subtle way, more something related to situatedness than theatrical acting. An object oriented LARP.

-How wide or specific our bestiary should be? A whole range of different object and consistency (mineral, vegetal, electronical, etc.) or just one kind of object (shipping parcels for example) explored in depth? 
+How wide or specific should our bestiary be? A whole range of different objects and materials (mineral, vegetable, electronic, etc.), or just one kind of object (shipping parcels for example) explored in depth?

From here --> visual identity with 3D scan?

@@ -685,34 +679,31 @@ From here --> visual identity with 3D scan?

### The Three Interactions

-All the interactions are focused on the physical use of touchscreen. They are simple and intuitive gestures, that dialogue with the movements of the performer. 
+All the interactions are focused on the physical use of the touchscreen. They are simple and intuitive gestures that dialogue with the movements of the performer.

There are three sections in the performance and one interaction for each. We start simple and gradually add something, in order to introduce the mechanism slowly.

-The three steps are: 
-
-1. *presence*
-2. *rythm*
-3. *space*
+The three steps are:

+1. _presence_
+2. _rythm_
+3. _space_

-*Presence* is the simple act of touching and keep pressing the screen. Ideally is an invite for the users to keep their finger on the screen the whole time. A way for the user to say: hello im here, im connected. For the first part of the performance the goal is to transform the smooth surface of the touchscreen in something more. A sensible interface, a physical connection with the performer, a shared space. 
+_Presence_ is the simple act of touching and keeping the screen pressed. Ideally it's an invitation for the users to keep their finger on the screen the whole time. A way for the user to say: hello I'm here, I'm connected. 
For the first part of the performance the goal is to transform the smooth surface of the touchscreen into something more. A sensitive interface, a physical connection with the performer, a shared space.

-*Rythm* takes into account the temporality of the interaction. The touch and the release. It gives a little more of freedom to the users, without being too chaotic. This interaction is used to trigger events in the virtual environment such as the coming into the world of the object. 
+_Rythm_ takes into account the temporality of the interaction. The touch and the release. It gives a little more freedom to the users, without being too chaotic. This interaction is used to trigger events in the virtual environment, such as the coming into the world of the objects.

-*Space* is the climax of the interaction and map the position on the touchscreen into the VR environment. It allows the user to move around in concert with the other participants and the performer. Here the plan is to take the unreasonable chaos of the crowd interacting and building something choreographic out of it, with the same approach of the collective ritual ending of the previous iteration. 
+_Space_ is the climax of the interaction and maps the position on the touchscreen into the VR environment. It allows the user to move around in concert with the other participants and the performer. Here the plan is to take the unreasonable chaos of the crowd interacting and build something choreographic out of it, with the same approach as the collective ritual ending of the previous iteration.

**Each section / interaction is developed in two parts:**

-- *an initial moment of invitation* where the performer introduces the interaction and offer it to the user via something similar to the functioning of mirror neurons. Imagine the movement for St.Thomas as invitation to keep pressing the touchscreen. 
+- _an initial moment of invitation_ where the performer introduces the interaction and offer it to the user via something similar to the functioning of mirror neurons. Imagine the movement for St.Thomas as invitation to keep pressing the touchscreen. - It is a moment that introduces the interaction to the public in a practical way, instead of following a series of cold instruction. It is also a way to present the temporality and the rythm of the interaction. + It is a moment that introduces the interaction to the public in a practical way, instead of following a series of cold instruction. It is also a way to present the temporality and the rythm of the interaction. -- *a following moment of composition*, in which the interactive mechanism is explored aesthetically. For *Presence* is the way the performer interact with the obejct inside the space. For *Space* is facilitating and leading the behaviour of the users from something chaotic to something organic (from random movements to a circular pattern?) +- _a following moment of composition_, in which the interactive mechanism is explored aesthetically. For _Presence_ is the way the performer interact with the obejct inside the space. For _Space_ is facilitating and leading the behaviour of the users from something chaotic to something organic (from random movements to a circular pattern?) ### Tech Update @@ -721,16 +712,16 @@ Started having a look at reactive programming. Since everything here is based on Starting from here: [The introduction to Reactive Programming you've been missing](https://gist.github.com/staltz/868e7e9bc2a7b8c1f754) -For notification and audio planning to use howler.js, probably with sound sprites to pack different sfx into one file. +For notification and audio planning to use howler.js, probably with sound sprites to pack different sfx into one file. 
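The sound-sprite idea mentioned above can be sketched with a small helper that packs the per-object sfx into one howler.js sprite map. The clip names and durations here are made up; the sprite format (`[offset ms, duration ms]`) is howler's documented one.

```javascript
// Sketch: building a howler.js sprite map for several sfx packed into one file.
// Clip names and durations are illustrative, not the actual OOC assets.

function buildSpriteMap(clips) {
  // clips: [{ name, duration }] in milliseconds, in the order they appear in the file
  const sprite = {};
  let offset = 0;
  for (const { name, duration } of clips) {
    sprite[name] = [offset, duration]; // howler sprite entry: [start, length]
    offset += duration;
  }
  return sprite;
}

const sprite = buildSpriteMap([
  { name: 'notify', duration: 300 },
  { name: 'activate', duration: 800 },
  { name: 'focus', duration: 450 },
]);

// then, in the browser:
// const sfx = new Howl({ src: ['sfx.mp3'], sprite });
// sfx.play('activate');
```

Packing the effects into one file means a single HTTP request and a single decoded buffer per phone, which matters when a whole audience connects at once.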
[https://github.com/goldfire/howler.js](https://github.com/goldfire/howler.js)

## 10/08 and 9/08 and 11/08

### second interaction

_how to call the Three Interactions? TI? 3I ? III I ? ok stop_

-it's usefull to imagine the lifecycle of the object to think about the three interactions. 
+it's useful to imagine the lifecycle of the object when thinking about the three interactions.

```
1 presence___presence____being there
```

So for what concerns the second interaction:

-- could be related to the configuration of the object, a way to be more or less structured 
-- for example start from totally deconstructed object and gradually morph into it's normal state 
-- an assemblage of different parts 
+- could be related to the configuration of the object, a way of being more or less structured
+- for example start from a totally deconstructed object and gradually morph into its normal state
+- an assemblage of different parts

Following the timeline of the performance we could set up a flow of transformation for every object: at the beginning displacing the object randomly, messing around with its parts. We could gradually dampen the intensity of these transformations, reaching a regular model of the object at the end.

These transformations are not continuous, but triggered by the taps of the users. They could be seen as snapshots or samples of the current level of transformation. In this way, with either a high or low sample rate, we can get a rich amount of variation. This means that if we have a really frantic moment with a lot of interactions the transformations are rich as well, with a lot of movement and randomness. But the same remains true when the rhythm of interaction is lower and calmer: it still gets just the right amount of dynamics. 
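The tap-sampled transformations above can be sketched as follows. This is a rough sketch of the idea rather than the actual vvvv patch: the function names, the part model, and the 0→1 progress parameter are all illustrative.

```javascript
// Sketch: each tap samples the current displacement intensity, which is damped
// over the course of the section so the object converges to its regular state.
// Names and ranges are hypothetical, for illustration only.

function makeObject(partCount) {
  // every part starts at its "regular" position (the origin, for simplicity)
  return Array.from({ length: partCount }, () => ({ x: 0, y: 0, z: 0 }));
}

// progress: 0 at the start of the section (full chaos) → 1 at the end (regular model)
function intensity(progress) {
  return Math.max(0, 1 - progress);
}

// called once per tap: displace every part by a random offset scaled by the
// current intensity — whatever the sample rate, the motion stays proportionate
function onTap(parts, progress, random = Math.random) {
  const k = intensity(progress);
  return parts.map(() => ({
    x: (random() * 2 - 1) * k,
    y: (random() * 2 - 1) * k,
    z: (random() * 2 - 1) * k,
  }));
}
```

Because the displacement is resampled per tap, a frantic audience produces dense, jittery motion and a calm one produces sparse motion, but both converge to the same regular model as `progress` reaches 1.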
+One aspect that worries me is that these transformations could feel totally random, without any linearity or consistency. I found a solution to this issue: applying some kind of uniform transformation to the whole object, for example a slow, continuous rotation. In this way the object feels like a single entity even when all its parts are scattered around randomly.

The transformation between the displaced and the regular states should take into account what I called _incremental legibility_, that is:

-- progressively transform more feature (position, rotation, scale, texture, colors, etc) 
-- progressively decrease intensity of the transformations 
+- progressively transform more features (position, rotation, scale, texture, colors, etc)
+- progressively decrease the intensity of the transformations

-in this way we could obtain some kind of *convergence* of the randomness. 
+in this way we could obtain some kind of _convergence_ of the randomness.

-Actually the prototype works fine just with the decreasing intensity, i didn't tried yet to transform the different features individually or in a certain order. 
+Actually the prototype works fine with just the decreasing intensity; I haven't yet tried transforming the different features individually or in a certain order.

-Also: displacing the textures doesn't look nice. It just feels broken and glitchy, not really an object. 
+Also: displacing the textures doesn't look nice. It just feels broken and glitchy, not really an object.

**for what concerns the display:**

1. 
in one screen we cluster all the objects in a plain view, something like a grid (really packed I presume? it depends on the amount)
-2. in the other we could keep them as they were in the first interaction, and present them through the point of view of the performer, keeping the sound notification when she gets closer and working as a close-up device. 
+2. in the other we could keep them as they were in the first interaction, and present them through the point of view of the performer, keeping the sound notification when she gets closer and working as a close-up device.

we could also display the same thing on both screens, to lower the density of objects and focus more on the relationship between the performer and the public as a whole, attuning the rhythm

how does this interaction interact with the choreography? is it enough for the performer to be just a point of view?

-## the big practical recap 
+## the big practical recap

### 0. Intro

-- *Interaction*
-  * user logs in the website
-  * there is a brief introduction
-  * there are simple instructions: turn up volume
-  * he can either select a 3d object or it's given one randomly ???
+- _Interaction_
+
+  - user logs into the website
+  - there is a brief introduction
+  - there are simple instructions: turn up the volume
+  - they can either select a 3d object or be given one randomly ???
+
+- _Stage_

-- *Stage*
-  * performer in idle mode, already inside the VR
-  * user connected trigger minimal movements, imperceptible
-  * references:
-    - isPerformerOnline? trigger

+  - performer in idle mode, already inside the VR
+  - a connecting user triggers minimal, imperceptible movements
+  - references:
+    - isPerformerOnline? 
trigger

-- *Two screens (loops)*
-  * Object Oriented Choreography v3.0
-  * To partecipate in the perfomance connect to o-o-c.org
-  * QR code
+- _Two screens (loops)_

-- *Website*
-  * Start page - brief overview + username + enter button
-  * Instruction + Waiting room
+  - Object Oriented Choreography v3.0
+  - To participate in the performance connect to o-o-c.org
+  - QR code

-- *Sound*
-  * first pad slowly fade in
+- _Website_

-- *TODOs*
-  * Screens UI:
+  - Start page - brief overview + username + enter button
+  - Instructions + Waiting room

+- _Sound_
+
+  - first pad slowly fades in
+
+- _TODOs_
+  - Screens UI:
    1. title
    2. call to connect
    3. website url
+  - Website UI:
    1. Start page - brief overview + enter?
    2. Instructions
    3. Username input or alternate identifier
    4. Waiting room
+  - VR:
    1. New User notification

### 1. Presence

-*Presence* is the simple act of touching and keep pressing the screen. Ideally is an invite for the users to keep their finger on the screen the whole time. A way for the user to say: hello im here, im connected. For the first part of the performance the goal is to transform the smooth surface of the touchscreen in something more. A sensible interface, a physical connection with the performer, a shared space. 
+_Presence_ is the simple act of touching and keeping the screen pressed. Ideally it's an invitation for the users to keep their finger on the screen the whole time. A way for the user to say: hello I'm here, I'm connected. For the first part of the performance the goal is to transform the smooth surface of the touchscreen into something more. A sensitive interface, a physical connection with the performer, a shared space.
+
+- _Interaction_
+
+  - activated by touch press.
+  - the user needs to keep pressing in order to stay connected to the performer.
+  - when users press they appear in the virtual environment in the form of 3D objects.
+  - The object is still; its position in the space is defined by the Abstract Supply Chain structure.
+  - When the performer gets close to users, a notification is sent to their smartphone and plays some sound effects.
+  - Every object has its own sprite of one-shot sounds
+    **bonus**: the longer the user keeps pressing, the bigger the object grows? so it's activated more frequently and this could lead to some choir and multiple activations at the same time?

+- _Stage_
+
+  - the performer guides the public to mimic her at the beginning with the invitation
+  - the disposition of objects in space offers a structure for composition
+  - references:
    1. st. thomas, scribble, fingertips triggers (invitation)
    2. perimetro, area triggers (composition)

-- *Two screens*
-  * Direct feedback of interactions
-  * Representation of the space inside the VR? Structure of the ASC
-  * Position of the performer inside the VR
+- _Two screens_
+
+  - Direct feedback of interactions
+  - Representation of the space inside the VR? Structure of the ASC
+  - Position of the performer inside the VR

-- *Website*
-  * press interaction. users are invited to keep pressed the touchscreen. 
_Presence_ is the simple act of touching the screen and keeping it pressed. Ideally it is an invitation for the users to keep their finger on the screen the whole time. A way for the user to say: hello, I'm here, I'm connected. For the first part of the performance the goal is to transform the smooth surface of the touchscreen into something more. A sensitive interface, a physical connection with the performer, a shared space.

- _Interaction_

  - activated by touch press.
  - the user needs to keep pressing in order to stay connected to the performer.
  - when users press they appear in the virtual environment in the form of a 3D object.
  - The object is still; its position in the space is defined by the Abstract Supply Chain structure.
  - When the performer gets close to users, a notification is sent to their smartphones and plays some sound effects.
  - Every object has its own sprite of one-shot sounds
    **bonus**: the longer the user keeps pressing, the bigger the object grows? so it's activated more frequently, and this could lead to some choir and multiple activations at the same time?

- _Stage_

  - the performer guides the public to mimic her at the beginning with the invitation
  - the disposition of objects in space offers a structure for composition
  - references:
    1. st. thomas, scribble, fingertips triggers (invitation)
    2. perimetro, area triggers (composition)

- _Two screens_

  - Direct feedback of interactions
  - Representation of the space inside the VR? Structure of the ASC
  - Position of the performer inside the VR

- _Website_

  - press interaction. users are invited to keep pressing the touchscreen.
  - touch feedback
  - audio feedback when the performer gets closer

- _Sound_

  - Audio track from PA
  - Interaction sfx from the phones

- _TODOs_
  - Sounds for the objects
  - Soundcheck to balance PA and smartphone volume
  - 3D objects
    - which objects?
    - how many?
    - how to provide variation?
    - find & clean
  - VR
    - define the disposition in the 3D environment --> abstract supply chain
  - Screens:
    - define how the objects are displayed: how to get to the second interaction?
    - Shall we see the objects right from the start? Or could they be introduced later?

### 2. Rhythm
_Rhythm_ takes into account the temporality of the interaction. The touch and the release. It gives a little more freedom to the users, without being too chaotic. This interaction is used to trigger events in the virtual environment, such as the coming into the world of the objects.

- _Interaction_

  - activated by tap
  - tapping causes transformations in the object-avatar
  - at the beginning the transformations are intense, the objects totally deconstructed
  - the transformations get less and less strong toward the end of the section
  - reconstruction of the object

- _Stage_

  - the performer intercepts the interactions of the public: tries to influence rhythm and intensity
  - introduce the performer's point of view in the virtual environment
  - references:
    - tapping (invitation)
    - logic & logistic, efficiency (stationary, high intensity)
    - scroll, optical triggers (stationary, low intensity)

- _Two screens_

  1. Cluster all the objects in a plain view, something like a grid (could be really packed?)
  2. Keep objects where they were in the first interaction, POV performer working as a close-up device.

  - could also display the same thing on both screens, to lower the density of objects and focus more on the relationship between the performer and the public as a whole, attuning the rhythm

- _Website_

  - tap interaction. users are invited to tap the screen.
  - visual instant feedback on the phone
  - audio feedback when the performer gets close

- _Sound_

  - Audio track from PA
  - Keep the sound notification from the smartphone when she gets close
  - Sound on the smartphone could respond to taps when the performer gets close

- _TODOs_
  - Sound effects
  - 3D Objects import and optimization

### 3. Space

_Space_ is the climax of the interaction and maps the position on the touchscreen into the VR environment. It allows the users to move around in concert with the other participants and the performer. Here the plan is to take the unreasonable chaos of the crowd interacting and build something choreographic out of it, with the same approach as the collective ritual ending of the previous iteration.

- _Interaction_

  - activated by touch drag
  - the user interacts with the touchscreen as a pointer
  - the user's drag moves the object in the VR space
  - after the initial chaos both performer and directors suggest a circular movement

- _Stage_

  - the performer and the users move in the same space
  - the performer tries to intercept the movements of the public
  - working with directions, speed and intensity
  - references:
    - invitation ???
    - collective ritual trigger (composition) --> to outro

- _Two screens_

  1. view is from the top and is 1:1~ with the smartphone screens
  2. could be either:
     - the point of view of the performer
     - following object
     - static close-ups

- _Website_

  - xy interaction. users are invited to drag around the screen.
  - visual instant feedback on the phone?
  - audio feedback when getting closer to the performer?

- _Sound_

  - Audio track from PA
  - Keep the sound notification from the smartphone?

- _TODOs_
  - Sounds: Better way to use notifications?
  -

### 4. Outro
- _Interaction_

  - part 3. slows down till it stops

- _Two screens_

  - everything black
  - overlay fade in
    1. title
    2. credits
    3. participants' names ?

- _Website_
  - overlay fade in
    1. title
    2. credits
    3. participants' names ?
    4. initial overview

- _Stage_
  - performer gets out of VR
  - thanks & goodbye

- _Sound_

  - Audio track from PA ends
  - No more notifications from the smartphones

- _TODOs_
  - credits system?
  - UI screens
  - UI website

## 12/08 vvvv app design

placeholders:

- intro slides loop
  - 2d text and graphics
  - 3d objects
    (qr code for website?)
- vr UI notification for user connection

output manager

a system where you can decide where to render:

- screen 1
- screen 2
- both screens

## 13/08 and 14/08 and 15/08

### Last Mile

For the objects we will focus on Last Mile Logistics. The moment in which things shift from the global to the local, from an abstract warehouse to your doorstep. Last Mile Logistics is tentacular, it is made of vectors that head toward you.

We asked Nico for some suggestions for good quality 3D models and he replied with a list from Sketchfab. Thanks a lot.

Here are the ones we already imported:

- https://sketchfab.com/3d-models/warehouse-shelving-b261a6ff8d7243df9d20acaabc76aab0
- https://sketchfab.com/3d-models/platform-trolley-fcedc57affa34dba9870cefd9668e8fe
- https://sketchfab.com/3d-models/pallet-truck-0419ee072fa4427aaadc2afbcae2894d
- https://sketchfab.com/3d-models/hand-truck-f539455450ca40df8843a602aafbda91
- https://sketchfab.com/3d-models/set-of-cardboard-boxes-8986ba512f704ac5b253286a0d1ad8bb
- https://sketchfab.com/3d-models/plastic-crate-74d5ce81aacd4c9c82290d3f90635513
- https://sketchfab.com/3d-models/pallet-ad8768f522184364af70b56846d10fcf

Will need to credit the authors of the models:

```
"pallet truck" (https://skfb.ly/6UV88) by Kwon_Hyuk is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"8" (https://skfb.ly/6Aovp) by Roberto is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"Plastic Milk Crate Bundle" (https://skfb.ly/ov7Nn) by juice_over_alcohol is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"Compostable Burger Box .::RAWscan::." (https://skfb.ly/onGUV) by Andrea Spognetta (Spogna) is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).
```

### OO Graphics Dasein with Iulia

Iulia came for the weekend to work on the visuals! We spent a full-immersion graphic design visual dasein weekend to glue everything together and the results are nice.

For the screens we decided on a physical UI, condensing everything into a 3D quad inserted into the scene with the other objects.

For the website we went for the same big square concept and used the same palette. Total black and blue. Grazie Ragazzi & Forza Atalanta.

- [see figma](https://www.figma.com/file/zrnQcyBnAPtOIEojzxHd9L/OOC?node-id=2130%3A497)
- Timelapse

TODO: img

### Varia

- semantic versioning for the title? https://semver.org/

## Aqua Planning

TODO:

- load assets in a clever way (load once and then reuse?)

- scenes
  - intro:
    - fix assets
  - presence: draft
    - link to users did it
    - implement audio system
  - rhythm: draft
    - link to users
    - implement transformations
    - how things are scattered around ???
    - pov performer
  - space:
    - link to users
    - implement instancing
    - pov performer
  - outro:
    - object
    - text on textures
- organize scenes
- timeline

- web

  - user interaction
  - sound sprites ?
  - custom domain
  - essay
  -

- server

  - pickup behaviour
  - !!! optimization !!!

- scene manager:

  - vvvv
  - server

- user <----> 3d models?
  - how to instantiate

## 24/08/2022

Our video shooting is going to be on Sunday the 28th. We will film a bit of the rehearsal with Sofia and say something about the project. Need to prepare something so as not to look too dumb or complicated.

Finally decided to approach the assets problem: how to load an incredible amount of 3D models and materials into the patch?
The answer is: via the Stride Game Studio.
I don't like it, but it works fine and gives us fewer import and loading problems, since every asset is compiled and pre-loaded at the opening of the patch. Or something like that, I'm not super sure.

So these are the specifics to load things:

- Stride Game Studio 4.0.1.1428 (the same that vvvv is using, see the About panel or the dependencies)
- Models are imported in OBJ format to keep the meshes separated.
- Merge Meshes in the model panel should be checked, otherwise vvvv will crash.
- LoadProject node in vvvv and select the .csproj of the game.

Things can be organized in scenes, which could be an interesting way to deal with the different interaction moments. Let's see.

TODO:

- Text for Sunday
- Some nice and funny desktop setup to show that we are working with technology 👩‍💻
- Import models and materials in the Stride project
- Refactor the vvvv patch to work with the Stride project
- Do we want to print the essay? On the AH maps?

## 25/08

Yesterday night loaded the first batch of objects into Stride. It required a bit of time to set up a proper workflow.

This morning refactored the patch to work with assets from the Stride Project.
Now there is an Object node that takes the name of the model and returns an entity with the various parts of the 3D object as children, with the right materials etc.
In this way we can set individual transforms on the elements and decompose the objects into pieces.

next for today:

- new 3d objects : download - break - import DONE
- [see Iulia's list](https://docs.google.com/document/d/1S_onDYOahGX7wabcGuFxTBczhMXY6jWBD0-MGabABNY/edit?usp=sharing)

OK SELECTA 3D Objects

- [Traffic Cone](https://sketchfab.com/3d-models/traffic-cone-game-ready-8cdac192abd044988c314eeb6377a3a0)
- [Warning Panel](https://sketchfab.com/3d-models/warning-panel-1a15ec7e2a3f446cb4bd9dde807baac5)
- [Traffic Signals](https://sketchfab.com/3d-models/8-inch-ge-dr6-traffic-signals-64eccd680a254434b329a6a32aaa571f)
- [Turnstile](https://sketchfab.com/3d-models/simple-turnstile-b0fb9ddd343544a681ae108591bf0277)
- [Wooden Crate](https://sketchfab.com/3d-models/wooden-crate-e12fde1101b84b6da40056336a7b309d)
- [Office Knife](https://sketchfab.com/3d-models/an-office-knife-ce5a4c19178a44b38c1fb8d8c9d18ee5)
- [Cutting Pliers](https://sketchfab.com/3d-models/cutting-pliers-534ca221c6ea482d9f9bf13478022c32)
- [Power Plug](https://sketchfab.com/3d-models/power-plug-outlet-adapter-connector-strip-69b6c77ec76b4ce7a4a79067b3f0e316)
- [Microwave](https://sketchfab.com/3d-models/microwave-oven-5af99db17fb645bba7e55ace9e7b6b34)
- [Surveillance Cam](https://sketchfab.com/3d-models/camera-7a937b80d13a44698ec605872d126f5f)
- [Tier Scooter](https://sketchfab.com/3d-models/tier-scooter-9479a6d3f1474971afde26b33fe96c55)
- [Bike](https://sketchfab.com/3d-models/bike-version-01-c10d7de60f3c4df9a3a8f892aacf18cd)
- [Bike Stand](https://sketchfab.com/3d-models/cc0-bicycle-stand-4-597656d383f94bca8fa098905a126ee1)

- refactor presence with assets first thing in the morning and then
- refactor rhythm with assets (actually implement? don't remember if it's already there)

(it's still a bit clunky the way the instancing transform is applied to the single elements of the 3d model, but let's see)
(maybe it's enough to implement it in a modular way so the two scenes work with different nodes?)

- maybe now:
  - adjust bio & pic for website DONE
  - think about the interview on Sunday WIP

```
Francesco Luzzana is a digital media artist from Bergamo, Italy.

Francesco Luzzana (he/him) develops custom pieces of software that address digital complexity, often with visual and performative output. He likes collaborative projects, in order to face contemporary issues from multiple perspectives. His research aims to stress the borders of the digital landscape, inhabiting its contradictions and possibilities. He graduated in New Technologies at Brera Academy of Fine Arts and is currently studying at the master Experimental Publishing, Piet Zwart Institute.
```

now for the interview:

two way binding:

attuning to the choreography of objects moved by digital platforms to grasp their

_modality_

- an interactive performance of contemporary dance
- with a performer inside the VR
- connected with the public via smartphone
- that can transform the space in which she moves

_contents_

- last mile logistics and the very body of the supply chain
- used as an interface between our daily lives and the accidental megastructure of digital platforms
- object oriented ontology and object oriented programming

_for the shooting:_

- sofia POV in 3d space with a lot of objects and that's it
- a bit of vvvv (ws receiver and factory?) and a bit of vscode (ws server)
- smartphone interaction? not sure but could be useful

Also today I got a mail from Leslie 💌 and look at this:
[pzwiki.wdka.nl/mediadesign/Calendars](https://pzwiki.wdka.nl/mediadesign/Calendars:Networked_Media_Calendar/Networked_Media_Calendar/08-09-2022_-Event_1)

Mom I'm famous, I'm in the PZI Wiki!
todo: send pic to mic

## 28/08 - 31/08 First rehearsal with Sofia

timeline:

- intro loop
- website: username and confirm
- 0 start
- transition fade out and music starts
- presence
  - ambient light off 0
  - point light off 0
  - website: waiting room
  - sofia faces the screen, back to the public
  - slowly turns
  - 5:00 churchy moment, sofia starts moving
  - 7:20 music changes
  - website: presence button, sound notification
  - point light on, text posi
  - website: presence interaction
  - ambient light on 1 slow transition --> from 7:30 ~ to 8:00 (.30 min) super ease in
  - swap sock transition (--)--> )() --> from 8:20 ~ to 10:30 (2:00 min) ease-in circ ~ ease-in-out circ ??? >>> stop in the middle and then explode at 10:40 !!!! setup light shaft ?
- rhythm
  - 13:00 force all smartphones white, disable interaction
  - 13:10 --> 14:10 transition to performer POV (1 min) - linear ease-out - point light on ambient light off directional light off
  - 16:00 --> ambient light on .125 directional light on 1 transition
  - 16:45 --> audio cue to transition
  - 17:00 --> obj reconstruction fluido ~:40 min not interactive
  - 17:40 --> transition to general view, linear, 1 min
- space
  - website: space interaction 18:30
  - 18:40 zoom out transition endless super slow 100 m away
  - 21:00
  - website: text the zone is the zona at 20:50

## The blurb & about object orientation

Object Oriented Choreography proposes a collaborative performance featuring a dancer wearing a virtual reality headset and the audience itself. At the beginning of the event, the public is invited to log on to o-o-c.org and directly transform the virtual environment in which the performer finds herself.
The spectators are an integral part of the performance and contribute to the unfolding of the choreography.

The work offers an approach to technology as a moment of mutual listening: whoever participates is no longer a simple user, but someone who definitely inhabits and creates the technological environment in which the performance happens. The performer is not only connected with each one of the spectators, but also acts as a conduit to link them together. In this way the show re-enacts and explores one of the paradigms of our contemporary world: the Zone.

The Zone is an apparatus composed of people, objects, digital platforms, electromagnetic fields, scattered spaces, and rhythms. An accidental interlocking of logics and logistics, dynamics and rules that allow it to exist, to evolve, and eventually to disappear once the premises that made it possible come undone. The Zone could be an almost fully automated Amazon warehouse as well as the network of shared scooters scattered around the city. It could be a group of riders waiting for orders outside your favorite take-away, or a TikTok house and its followers.

The research that OOC develops is influenced by the logistical and infrastructural aspects that support and constitute this global apparatus. The very title of the work orbits a gray zone between the theoretical context of Object Oriented Ontology and the development paradigm of Object Oriented Programming.
Moving through these two poles the performance explores the Zone: both with the categories useful to interact with hyperobjects such as massive digital platforms, and across the different layers of the technological Stack, with a critical approach to software and its infrastructure. A choreography of multiple entities in continuous development.

## 06/09

- catchup system
- safari audio seems ok
- reconnect when socket state is closed

- white
- rhythm interaction
- move around
- switch pov to global seems off
- ambient light 7:45

## Recap 4 december

- Web:
  - client: choose your object, accessibility, i18n multi-lang, link to previous versions (longform is actually hosted at v2.o-o-c.org), about and better general info
  - server: better users management for credits
- Projection:
  - objects material improvement
  - enhance background colors
  - adjust lines opacity for interaction
  - names for rhythm interaction?
  - better object positioning (no reshuffle when users join or leave)
- Sound:
  - sound design for smartphone, for presence and ending
  - OST in v4 timeline?
- Space:
  - light design, space setup & projection
  - research on [addressable led strips](https://www.gindestarled.com/the-ultimate-guide-to-choosing-the-right-addressable-led-strip/)? are they cheap? can we use the next performance to test a small led setup?
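One open point from the 06/09 notes is reconnecting when the socket state is closed. A possible client-side sketch with exponential backoff — the function names, the timings, and the injected socket factory are assumptions, not the current client code:

```javascript
// Delay grows 500ms, 1s, 2s, 4s, then stays capped at 8s between attempts.
function backoffDelay(attempt, base = 500, cap = 8000) {
  return Math.min(base * 2 ** attempt, cap);
}

// makeSocket is whatever creates the connection (e.g. url => new WebSocket(url)),
// injected so the retry logic stays testable without a network.
function keepAlive(url, makeSocket, attempt = 0) {
  const socket = makeSocket(url);
  socket.onopen = () => {
    attempt = 0; // connection is back, reset the backoff
  };
  socket.onclose = () => {
    // Schedule a fresh connection attempt after the current backoff delay.
    setTimeout(() => keepAlive(url, makeSocket, attempt + 1), backoffDelay(attempt));
  };
  return socket;
}
```

The cap matters during a performance: without it a phone that drops out early would end up waiting minutes before rejoining.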