---
categories:
- Performance
- Web
- VR
date: 07/07/2022
description: Dev log for Object Oriented Choreography
slug: ooc-summer-session
title: OOC - Summer Session - V2
cover: ooc.jpg
cover_alt: ooc but is a workshop
---
## Workshop?
The third iteration of OOC could be a two-part workshop: in the first part, participants co-design a VR environment with a custom multiplayer 3D editor. The second half is a dance workshop in which a choreographer guides and explores with the participants the virtual reality they created together. The VR headset is passed from hand to hand as a catalyst, transformed from a single-user device into a wandering, plural experience.
1. design the VR environment together
* to encode information with spatial and body knowledges
* to achieve meaningful and expressive interactivity through participation
* to take into account multiple and situated points of view

* the starting point is the essay on the Zone written for the previous iteration
* [the essay](https://www.neroeditions.com/object-oriented-choreography/)
* excerpts used as prompts, with a focus on using space as an interface with the body

2. explore the collective VR environment
* to decode information with spatial and body knowledges
* to transform VR into a shared device
* whoever is inside the VR trusts the ones outside
* the ones outside the VR take care of whoever is inside

* performative workshop
* stretching and warming up
* exercises: moving with hybrid space
* improvisation
### Outcomes:

1. documentation of the workshop
2. a different 3D environment for each iteration of the workshop, i.e. a digital gallery?
3. the 3D editor
## first part - design the VR environment
* how? with a custom 3D editor
* a kind of Tilt Brush?
* super simple editor, limited functionality
* work with volumes, maybe images (textures)? maybe text?

* how do we deal with the multiplayer aspect?
* how do we deal with the temporality of creation?
* how can participants collaborate if there is only one VR system?

* take into account that our VR system is:
* headset
* 2 controllers
* 3 motion trackers

* think of collaborative uses of these six pieces of hardware
* mixed editor?
* accessible from outside the VR system?
* like a web interface and a VR interface? multiplayer with different kinds of access and functionality?
* like VR for the volumes and web for images and text?
* how, technically?
* VR interface
* vvvv [https://visualprogramming.net/](https://visualprogramming.net/)
* [https://www.stride3d.net/](https://www.stride3d.net/)
* OpenVR

* for modelling volumes: dynamic meshes with marching cubes
* for UI: [https://github.com/ocornut/imgui](https://github.com/ocornut/imgui)

* Web interface
* three.js
* vue.js

* references
* (VR) 3D editor research
* [https://www.tiltbrush.com/#get-it](https://www.tiltbrush.com/#get-it)
* [https://openbrush.app/](https://openbrush.app/)
* [https://www.kodon.xyz/#faq](https://www.kodon.xyz/#faq)
* [https://masterpiecestudio.com/](https://masterpiecestudio.com/)
* [https://www.adobe.com/products/medium.html](https://www.adobe.com/products/medium.html)

* see
* terraforming - Sebastian Lague [https://www.youtube.com/watch?v=vTMEdHcKgM4](https://www.youtube.com/watch?v=vTMEdHcKgM4), for volume modelling
* my inner wolf - Studio Moniker [https://studiomoniker.com/projects/myinnerwolf](https://studiomoniker.com/projects/myinnerwolf), for the multiplayer work with images and textures
### what's the plan:

- transition from performance to workshop
- participative forms of interaction?
- simplify what's already there

### what's the point?

- make sense together of a complex, contradictory system such as the massive digital infrastructure

- what does it mean to make sense together? to accept the limits of our own individual description and join others to get a better view (a renegotiation of complexity?)

### what are our roles here?

- facilitators?
- to provide some tools and a context around them
- which kind of tools?
## Mapping the algorithm
Our technological environment is made of abstract architectures built of hardware, software and networks. These abstract architectures organize information, resources, bodies, time; in fact they organize our lives. Yet they can be really obscure and difficult to grasp, even to imagine.

Within VR we can transform these abstract architectures into virtual ones: spaces modelled on the nature, behaviour, and power relations around specific technologies. Places that constrain the movements of our body and at the same time can be explored with the same physical knowledge and awareness.

Starting from one specific architecture, we model and map it together with the public.

This iteration of OOC is a performance with the temporality of a two-part workshop: in the first part, participants model the virtual environment together with a custom VR editor that lets them create the space at 1:1 scale.

The second half is a performative workshop in which a choreographer guides and explores with the participants the virtual reality they created together. The VR headset is passed from hand to hand as a way to tune in and out of the virtual space, transformed from a single-user device into a wandering, plural experience.

Since an abstract architecture is composed of several entities interacting together, the dramaturgical structure can be written by following them. The narration of the modeling workshop, as well as the performative exercises from the warm-up to the final improvisation, can be modeled on the elements of the architecture.
~
The idea of having the public model the space and explore it with the performer responds to several needs:

- a virtual space is better experienced first hand
- meaningful and expressive forms of interaction
- making sense together of black-box algorithms
- participants are aware of what's happening inside the VR, so there is no need for other visual support

To give an example: the first OOC was modeled on a group chat. The connected participants were represented as *clients* placed in a big circular space, *the server*. Within the server, the performer acted as the *algorithm*, taking messages from one user to the other.
## Could it be done in a different way?

Here are three scenarios:
### Workshop

* a two-part workshop: in the first part participants co-design a VR environment with a custom multiplayer 3D editor. The second half is a dance workshop in which a choreographer guides and explores with the participants the virtual reality they created together. The VR headset is passed from hand to hand as a catalyst, transformed from a single-user device into a wandering, plural experience.
* it has a particular temporality: it is not as intense as a performance, and the pace can be adjusted to keep everyone engaged.
* it follows the idea of the lecture performance, steering toward a more horizontal and collaborative way of making meaning
* it provides facilitation
* it cannot be done the same day as the presentation, only at least a couple of days before (less time for rehearsal)
### Installation:

* There is the VR editor tool, and the facilitation of the workshop is recorded as text (maybe audio?); participants can follow it through and create the environment while participating. The text is written with the choreographer / performer. It's a mix between the two moments of the workshop. The performer follows the same script.

* VR is used as a single-player device: intimate experience, ASMR or tutorial vibe
* probably doable with up to two or three people at the same time? (should try)
* a platform to see the different results?
* how long should it be to be meaningful for the public? at least 10 min? 15 min?
### Platform:

* interactions and performance happen at different moments
* user-generated content
* we gather content online and use it to build the stage for the performance
* there is a platform on which people can build space? does it make sense if it's not done with VR, i.e. involving the body directly?
## A draft timetable
* week 1 \_ 18-24 jul
* define concept
* draft scenario
* define process
* schedule and timetable
* plan outcomes
* presentation (with visual refs and examples)

* week 2 \_ 25-31 jul
* research and writing for the workshop
* technical setup research for the editor
* vr editor research and experiments
* understand logistics for the workshop moments?

* 25 - 26:
* workshop research
* book of shaders

* 27
* book of shaders
* setup vr editor basics

* 28
* workshop research / writing
* book of shaders
* setup vr editor prototype

* 29
* workshop research / writing
* book of shaders
* setup vr editor prototype

* 30~31 buffer
* meeting with sofia
* meeting with ste \& iulia
* update log and sardegna

* week 3 \_ 1-7 aug
* first workshop text draft
* first working prototype of the vr editor
* setup rehearsal

* week 4 \_ 8-14 aug
* week 5 \_ 15-21 aug
* week 6 \_ 22-28 aug
* week 7 \_ 29 aug - 4 sep
* week 8 \_ 5-8 sep
### Sparse ideas

Trackers as point lights during the performance (see FF light in cave)
### References

- The emergence of algorithmic solidarity: unveiling mutual aid practices and resistance among Chinese delivery workers, [read](https://journals.sagepub.com/doi/full/10.1177/1329878X221074793)
- Your order, their labor: An exploration of algorithms and laboring on food delivery platforms in China, DOI:10.1080/17544750.2019.1583676
- The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms, DOI:10.1080/1369118X.2016.1154086
- Algorithms as culture: Some tactics for the ethnography of algorithmic systems, [read](https://journals.sagepub.com/doi/10.1177/2053951717738104)
- Redlining the Adjacent Possible: Youth and Communities of Color Face the (Not) New Future of (Not) Work, [read](https://static1.squarespace.com/static/53c7166ee4b0e7db2be69480/t/5682b8071c12101f97a8b4df/1451407367281/Redlining+the+Adjacent+Possible\_DS4SI.pdf)
## An overview for Sofia:
### notes from 02/22

* focus on how the space influences the body
* the public doesn't need to look at the phone all the time
* interaction from the public changes the space
* the performer is not inside the VR all the time; there could be moments outside (e.g. intro, outro)
* focus on dramaturgical development
* participants should recognize the results of their own interactions
* the public needs to see what the performer sees (or to have another visual support) -> projection?
### concept

Our technological environment is made of abstract architectures built of hardware, software and networks. These abstract architectures organize information, resources, bodies, time; in fact they organize our lives. Yet they can be really obscure and difficult to grasp, even to imagine.

Being in space is something everyone has in common, an accessible language. Space is a shared interface. We can use it as a tool to gain awareness and knowledge about complex systems.

Within VR we can transform these abstract architectures into virtual ones: spaces modelled on the nature, behaviour, and power relations around specific technologies. Places that constrain the movements of our body and at the same time can be explored with the same physical knowledge and awareness. (like what we did for the chat)

Starting from one specific architecture (probably the food delivery platforms typical of the gig economy, which move riders around), we model and map it together with the public. Since an abstract architecture is composed of several entities interacting together, a strong dramaturgical structure can be written following the elements of the architecture.
### how to - two options

1. performance as a workshop

a performance with the temporality of a two-part workshop: in the first part participants model the virtual environment together with a custom VR editor that lets them create the space at 1:1 scale.

Then a performative workshop in which a choreographer / performer guides and explores with the participants the virtual reality they created together. The VR headset is passed from hand to hand as a way to tune in and out of the virtual space, transformed from a single-user device into a wandering, plural experience.

2. performance as an installation

The VR editor is used as an installation. Besides the normal functionality to model the environment, it contains a timeline with the structure of the workshop recorded as audio. The performer activates the installation following the script. The text is written with the choreographer / performer; it's a mix between the two moments of the workshop version described before. After the performance, participants (up to three at a time) can follow the audio and be guided in the creation of the environment.

~

Both options can be activated multiple times, with different results. The resulting 3D environments can be archived in a dedicated space (like a showcase website) in order to document (communicate, and $ell the project again for further iterations)
# BREAKING CHANGES HERE
### Meeting with Sofia and Iulia

**ok ok ok no workshop, let's stick to what we have and polish it**

- interaction from the public changes the space
- facilitate access to the website
- website: intro, brief overview
- at the beginning the performer is already there, idle mode
- (vertical) screen instead of projection?
- from the essay to something more direct

~

- building block:
* text
* interaction
* space modification

```- - [ ] - - [ ] - - [ ] - ->```
### what do we need:

- a timeline
- a model for the application, a series of blocks like this:
* text
* duration
* interaction
* scene
## 28/7 - Prototype setup

### app design
* vvvv client
* VR system
* timeline
* scenes
* text
* interaction
* space
* soundtrack
* cms

* web client
* pages
* main
* interaction
* sound notification
* about
* i18n

* web server
* websocket server
### small prototype:

* vvvv can send scenes to the server
* datatype:
* text
* interaction
* type
* counter
* xy
* context
* description
* etc
## 29/7 - Prototype Setup \& other
The building block is the Stage. Each stage is a description of what's happening at the edges of the performance: what the screen is displaying, what's inside the VR, what's happening on the users' smartphones.

We can place a series of stages on a timeline and write a dramaturgy based on the relation between these three elements.

The model of the *stage* is something like this:
* text
* scene
* interaction
* type
* context

- `text` is the text that is going to be displayed on the screen
- `scene` contains some info or setup for the scene in the VR environment
- `interaction` holds the type of the interaction and other additional info stored as a context

`text` and `scene` are meant to be used in vvvv to build the VR environment and the screen display

`interaction` is meant to be sent via websocket to the server and from there to the connected clients
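
As a quick sketch, a stage could look like this on the web side (the field names follow the log, but the values and the shape-check helper are illustrative, not the actual codebase):

```javascript
// Illustrative sketch of the stage model described above.
// Field names come from the log; values and isStage are made up.
const exampleStage = {
  text: 'keep touching the screen',           // displayed on the screens
  scene: { name: 'supply-chain', lights: 2 }, // setup for the VR scene
  interaction: { type: 'touch', context: { hint: 'hold to stay connected' } },
};

// Minimal shape check before a stage is placed on the timeline.
function isStage(s) {
  return typeof s.text === 'string'
    && typeof s.scene === 'object' && s.scene !== null
    && typeof s.interaction === 'object' && s.interaction !== null
    && typeof s.interaction.type === 'string';
}
```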

It could be useful to keep track of the connected users.

It could be something like:
1. when someone accesses the website, a random ID is generated and stored in the local storage of the device. In this way, even if the user leaves the browser or refreshes the page, we can retrieve the same ID from the storage and keep track of who is who without spawning a new user every time there is a reconnection (which with websockets happens a lot!)
2. maybe the user could choose a username? it really depends on the kind of interaction we want to develop. also I was thinking of ending credits: "with the participation of" and then the list of users
3. when connecting and choosing a username, the client sends it to the server, which sends it to vvvv, which stores the users in a dictionary with their IDs. Every interaction from the user will be sent to the server and then to vvvv with this ID; in this way interactions can be organized and optimized, as well as linked to the appropriate user.
4. tell me more about surveillance capitalism
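
Point 1 could be sketched like this (the names `getClientId` and the storage key are hypothetical, not from the actual codebase; the storage object is injected so it works with `window.localStorage` in the browser):

```javascript
// Sketch: persist a per-device ID so websocket reconnections map back to
// the same user. STORAGE_KEY and getClientId are illustrative names.
const STORAGE_KEY = 'ooc-client-id';

function getClientId(storage, random = Math.random) {
  // Reuse the stored ID after a refresh or a websocket reconnection.
  let id = storage.getItem(STORAGE_KEY);
  if (!id) {
    // Timestamp + random suffix keeps the ID non-empty and unique enough.
    id = Date.now().toString(36) + random().toString(36).slice(2, 8);
    storage.setItem(STORAGE_KEY, id);
  }
  return id;
}
```

In the browser this would be called as `getClientId(window.localStorage)`.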
### about text - interaction

even if we can take excerpts from the essay we wrote, this reading setup is totally different. here our texts need to be formulated as a call to action, or a provocation to trigger the interaction.

a way to acknowledge the public
## 31/07 - Prototype setup: vvvv

The websocket implementation I'm using is simple. It just provides these kinds of events:

- on open (when a client connects)
- on close (when a client disconnects)
- on message (when there is an incoming message)
In order to distinguish between different types of messages, I decided to serialize every text as a JSON string with a field named *type*. When a message event is fired, the server looks at the type of the message and acts accordingly: every message type triggers a different reaction, i.e. it calls a different function.

In the previous versions the check on the message type was a loong chain of if statements, but that didn't feel right, so I searched a bit for a better way to manage it.

In the server (node.js) I created an object that uses the message types as keys and the associated functions as values. [javascript switch object](https://www.30secondsofcode.org/articles/s/javascript-switch-object)

For vvvv I asked for suggestions on the vvvv forum and ended up using a simple factory pattern with a common interface, IMessage, used to process the incoming message based on its type. [replacing long if chain](https://discourse.vvvv.org/t/replacing-loong-if-chain/20707/3)

In order to deal with the state of the application (each message operates in a different way and on different things), I created a Context class that holds the global state of the performance, such as the websocket clients and the connected users. The IMessage interface takes this context as well as the incoming message, so it can operate on the patch.

happy with it! it's much more flexible than the long if snake
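
On the node side, the switch-object idea looks roughly like this (handler names and the context shape are illustrative, not the real codebase):

```javascript
// Sketch of the "switch object" dispatch: the message type is a key into
// an object of handler functions, replacing the long if chain.
const handlers = {
  join(msg, ctx) {
    ctx.users.set(msg.id, msg.name); // register the connected user
    return `user ${msg.name} joined`;
  },
  touch(msg, ctx) {
    ctx.lastTouch = msg.id;          // remember who touched last
    return 'touch registered';
  },
};

function dispatch(raw, ctx) {
  const msg = JSON.parse(raw);
  const handler = handlers[msg.type];
  if (!handler) return 'ignored';    // unknown types are ignored, not errors
  return handler(msg, ctx);
}
```

Adding a new message type is then just adding one key to `handlers`, which is what makes it more flexible than the if chain.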
## 1-2/08 - two Displays & Prototype setup

Yesterday, together with Richard, we set up the two screens that show the public what's happening inside the VR. Initially they were mounted next to each other, vertically.

With Iulia we thought about how to place them. Instead of keeping them together, it would probably be better to use them at the edges of the interactive zone. Even if the screen surface seems smaller, it's a creative constraint \& it shapes the space of the performance more.

Ideally the viewer can see both screens and the performer at the same time.
The screens can display either the same or different things.

And now some general thoughts:

the username should be central in the visualization of the interaction, since it's the main connection point between what's happening outside and inside. could it be something different from a name? could it be a color? using a drawing as an avatar?
### types of interaction

the idea of presence, of being there, together and connected

`touching the screen <--- means to be connected with ---> the performer`

keep touching to be there

a light in the environment

and when the performer gets closer to the light, the connected phone plays a notification

maybe that could be enough?

just use the touchscreen as an xy pointer and make the nature of the pointer change
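
The xy pointer boils down to a tiny normalization step on the client (the function name is illustrative): the touch position is scaled and clamped to the 0..1 range, so vvvv can map it onto the space regardless of the phone's screen size.

```javascript
// Sketch: turn a touch position into a normalized, clamped xy pair
// ready to be sent over the websocket. touchToXY is an illustrative name.
function touchToXY(clientX, clientY, width, height) {
  const clamp01 = (v) => Math.min(Math.max(v, 0), 1);
  return { x: clamp01(clientX / width), y: clamp01(clientY / height) };
}

// e.g. on touchmove (assumed usage):
// ws.send(JSON.stringify({ type: 'xy', ...touchToXY(t.clientX, t.clientY, innerWidth, innerHeight) }));
```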
## 3/08 - Prototype Setup and doubts

Finished setting up the xy interaction between the clients and vvvv.

The setup with nuxt is messy, since it's stuck between nuxt 2 and vue 3. There are a lot of errors that don't depend on the application but rather on the dependencies, and it's really really annoying, especially since it prevents solid design principles.

I'm thinking of rewriting the web app using only Vue instead of nuxt, but I'm a bit afraaaaaidd.
## 4/08 - Script

I'm trying to figure out which setup to use to rewrite the application without nuxt. Currently I'm looking into fastify + vite + vue, but it's too many things at once and I'm a bit overwhelmed.

So now a break, and let's try to list what we need and the ideas that are around, to organize the work of the next week.

### Hardware Setup:

- 2 vertical displays, also used as Vive base station supports, placed at opposite corners of the stage
- PC with the Vive connection in the third corner
- the public stands around
### Performance Structure

**0. before the performance**

* *the two screens loop:*
* Object Oriented Choreography v3.0 (with versioning? it's funny)
* Connect to the website to participate in the performance
* o-o-c.org

* *website:*
* access page (choose a name or draw a simple avatar)
* waiting room with a short introduction: what this is and how it works. In 2 sentences.

* *stage:*
* performer in idle mode, already inside the VR
* users connecting trigger minimal movements?

* *sound:*
* first pad sloowly fades in?
**1. performance starts, first interaction: touch**

* *two screens:*
* direct feedback of the interaction
* representation of the space inside the VR
* position of the performer inside the VR (point light?)

* *website:*
* touch interaction. users are invited to keep pressing the touchscreen.
* a sentence to create context around the interaction? maybe not, because:
* to interact, the user doesn't need to look at the phone; it's more an intuitive and physical thing

* *stage:*
* performer does the St. Thomas movement to invite the touch interaction
* the public is invited to follow the performer? (for example releasing the touch suddenly, some kind of slow rhythm, a touch pattern; explore this idea as an introduction?)

* *sound:*
* ost from the PA
* interaction from the phones

* *interaction:*
* every user is an object in the VR space, placed in a random supply chain to build a meaningful space for the choreography. the object is visible only while the user is touching the screen.
* the performer can activate these objects by getting closer
* when an object is activated, it sends a notification to the user's smartphone, which plays some sound effect
* build on this composition

* **bonus**: the longer the user keeps pressing, the bigger the object grows? so it's activated more frequently, and this could lead to some choir-like, multiple activations at the same time?
```[]need a transition[]```
**2. second interaction: XY**

* *two screens:*
* one screen shows the representation of the space seen from the outside, a kind of aerial view
* one screen focuses on one user at a time
* for how objects move around, think of the kind of animations of *everything*, for example

* *website:*
* touch xy interaction
* double tap to recognize which one you are? with visual feedback, like a hop. maybe not necessary

* *stage:*
* for sure the beginning of the interaction will be super chaotic, with everyone going around like crazy.
* the goal could be to go from this initial chaos to some kind of circular pattern, which seems the most iconic and easy thing
* the performer invites circular movements, growing in intensity.
* actually this could be a great finale, using the same finale as the last iteration

* *sound:*
* ost from the PA
* focus notification (the smartphone rings when its user is in focus on the screen)

* *interaction:*
* users are invited to use the touchscreen as a trackpad, to move through the space.
* how not to be ultra chaotic from the start? or:
* how to facilitate this chaos toward something more organic

*it would be nice to have a camera system that lets you position the camera in preview mode and then push it to one of the screens, overriding the preset*
## 5-08

*Notes from the video of OOC@Zone Digitali. The names of the movements refer to the essay's triggers.*

### list of triggers:

- *Is performer online?*
is great for the beginning. It could start super minimal and imperceptible, a transition from idle mode to the beginning of the performance, with slowly increasing intensity

- *San Tommaso*, *Janus*
St. Thomas could be an opening, an explicit invitation to the touch interaction. hold the position and insist.

also

Janus' looking around and searching could be the reaction when someone connects and is placed in the space. When someone touches, look at them in the virtual place and then stay. it's a first way to create the impression of the environment that surrounds the performer

`↓↑`
- *Fingertips*, *Scribble*
are a good way to elaborate on the idea of the touch interaction. focus on the fingers as well as on the surface those fingers are sensing. Bring a new consistency to the touchscreen, transform its flat and smooth surface into something else.

- *Perimetro*, *Area*
Nice explorative qualities.
Could be used for the notification composition during the first interaction?
After the invitation, a moment of composition.

~

- *Tapping*, *and that floor moment with the body stretching*
floor movements for a second part? between the touch and xy interactions
- *Logic & Logistic*, *Efficiency*
Stationary movements that could introduce the performer's point of view. The body is super expressive while the head is still, so the point of view in the VR is not crazy from the start.

- *Knot*, *Velocity*
The stationary movement could then start traversing the space more, integrating the quality and intensity of efficiency and velocity.

~
- *Scrolling*
could be used during the xy interaction, again as a form of invitation
- *Collective Rituals*
the final sequence, building on a circular pattern of the xy interaction, slower and slower
- *Optical*
- *Glitch*
- *Fine*

I need to finish this analysis, but for now here is a draft structure for the performance. Eventually I will integrate it with the previous two sections: the Performance Structure and the trigger notes.

### Structure?
**I**

Invitation and definition of the domain: touch interaction and public participation
* a. invitation
* extend the extents of the touchscreen
* create a shared consistency for the screen surface
* b. composition
* explore it as a poetic device

**II**

????

**III**
from participation to collective ritual
## 6-08
Two ideas for the performance:

### a. Abstract Supply Chain
*--> about the space where the performer dances*

The space in the virtual environment resembles an Abstract Supply Chain more than an architectural space. It's an environment made not of walls, floor, and ceiling, but rather a landscape filled with objects and actors, the most peculiar one being the performer.

We can build a model that scales with the connection of new users. Something that makes sense with 10 people connected as well as with 50. Something like a fractal, legible at different scales and intensities.

Something between a map, a visualization, a constellation. Something that makes sense in a 3D environment and on a 2D screen or projection.

Lots of interesting input here:
[Remystifying supply chains](https://studio.ribbonfarm.com/p/remystifying-supply-chains)
### b. Object Oriented Live Action Role Play (LARP)

*--> about the role of the public*
We have a pool of 3D objects related to our theme: delivery packages, bikes, delivery backpacks, kiva robots, drones, minerals, racks, servers, gpus, containers, etc. a proper bestiary of the zone.
Every user is assigned an object at login. The object you are also influences, more or less, your behavior in the interaction. I'm imagining it in a subtle way, more something related to situatedness than to theatrical acting. An object oriented LARP.
How wide or specific should our bestiary be? A whole range of different objects and consistencies (mineral, vegetal, electronic, etc.) or just one kind of object (shipping parcels, for example) explored in depth?
From here --> visual identity with 3D scans?

### The Three Interactions

All the interactions are focused on the physical use of the touchscreen. They are simple and intuitive gestures that dialogue with the movements of the performer.
There are three sections in the performance and one interaction for each. We start simple and gradually add something, in order to introduce the mechanism slowly.
The three steps are:
1. *presence*
2. *rhythm*
3. *space*
*Presence* is the simple act of touching and keep pressing the screen. Ideally is an invite for the users to keep their finger on the screen the whole time. A way for the user to say: hello im here, im connected. For the first part of the performance the goal is to transform the smooth surface of the touchscreen in something more. A sensible interface, a physical connection with the performer, a shared space.
2 years ago
*Rythm* takes into account the temporality of the interaction. The touch and the release. It gives a little more of freedom to the users, without being too chaotic. This interaction could be used to trigger events in the virtual environment.
2 years ago
*Space* is the climax of the interaction and map the position on the touchscreen into the VR environment. It allows the user to move around in concert with the other participants and the performer. Here the plan is to take the unreasonable chaos of the crowd interacting and building something choreographic out of it, with the same approach of the collective ritual ending of the previous iteration.
2 years ago
**Each section / interaction is developed in two parts:**
2 years ago
- *an initial moment of invitation* where the performer introduces the interaction and offer it to the user via something similar to the functioning of mirror neurons. Imagine the movement for St.Thomas as invitation to keep pressing the touchscreen.
2 years ago
It is a moment that introduces the interaction to the public in a practical way, instead of following a series of cold instruction. It is also a way to present the temporality and the rythm of the interaction.
2 years ago
- *a following moment of composition*, in which the interactive mechanism is explored aesthetically. For *Presence* is the way the performer interact with the obejct inside the space. For *Space* is facilitating and leading the behaviour of the users from something chaotic to something organic (from random movements to a circular pattern?)
### Tech Update
Started having a look at reactive programming. Since everything here is based on events and messages flowing between clients, server and vvvv, the stream approach of reactive programming makes sense to deal with the flows of data in an elegant way.
Starting from here:
[The introduction to Reactive Programming you've been missing](https://gist.github.com/staltz/868e7e9bc2a7b8c1f754)
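To get a feel for the idea before pulling in a library, here is a hand-rolled miniature of the stream concept (RxJS and friends offer the real thing): events are pushed into a stream and flow through `map`/`filter` operators toward subscribers.

```javascript
// Toy event stream illustrating the reactive idea: touch events flow
// from the clients, get transformed, and reach subscribers (server,
// vvvv, other clients). This is a sketch of the concept, not RxJS.
function createStream() {
  const subscribers = [];
  return {
    subscribe(fn) { subscribers.push(fn); },
    emit(value) { subscribers.forEach(fn => fn(value)); },
    map(fn) {
      const out = createStream();
      this.subscribe(v => out.emit(fn(v)));
      return out;
    },
    filter(pred) {
      const out = createStream();
      this.subscribe(v => { if (pred(v)) out.emit(v); });
      return out;
    },
  };
}

// e.g. keep only taps and annotate them with a timestamp:
// touches.filter(e => e.type === 'tap').map(e => ({ ...e, t: Date.now() }))
```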
For notifications and audio I plan to use howler.js, probably with sound sprites to pack different sfx into one file.
[https://github.com/goldfire/howler.js](https://github.com/goldfire/howler.js)
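In howler.js a sprite is declared as a map from name to `[offset, duration]` in milliseconds within a single audio file. Keeping that map as plain data makes it easy to reuse; the `Howl` construction below follows howler's documented API, while the file names, sprite names and timings are made up:

```javascript
// One-shot sfx packed in a single file; values are [offset, duration]
// in milliseconds. Names and timings are placeholders.
const SPRITE = {
  ping:  [0, 300],
  chime: [400, 900],
  drop:  [1400, 500],
};

// total audio length required by a sprite map
function spriteEnd(sprite) {
  return Math.max(...Object.values(sprite).map(([off, dur]) => off + dur));
}

// Howler usage (browser side):
// const sfx = new Howl({ src: ['sfx.webm', 'sfx.mp3'], sprite: SPRITE });
// sfx.play('ping');
```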
## 10/08 and 9/08
### second interaction
*how to call the Three Interactions? TI? 3I ? III I ? ok stop*
it's useful to imagine the lifecycle of the object in order to think about the three interactions.
```
1 presence___presence____being there
2 rhythm_____quality_____in a certain way
3 space______behaviour___and do things
```
So for what concerns the second interaction:
- it could be related to the configuration of the object, a way of being more or less structured
- for example, starting from a totally deconstructed object and gradually morphing into its normal state
- an assemblage of different parts

Following the timeline of the performance we could set up a flow of transformation for every object: at the beginning, randomly displacing the object and messing around with its parts. We could gradually dampen the intensity of these transformations, reaching in the end the regular model of the object.
These transformations are not continuous, but triggered by the taps of the users. They could be seen as snapshots or samples of the current level of transformation. In this way, whether the sample rate is high or low, we get the right amount of variation. This means that in a really frantic moment with a lot of interactions the transformations are rich as well, with a lot of movement and randomness. But the same remains true when the rhythm of interaction is low and calmer: it only gets the right amount of dynamics.
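A sketch of that sampling idea, assuming the intensity decays linearly over the course of the performance (the decay schedule and displacement ranges are arbitrary): each tap takes a snapshot of the current intensity and scatters the object's parts by that amount.

```javascript
// Each tap samples the current transformation intensity and produces
// a new random displacement for every part of the object. `progress`
// goes from 0 (start of the performance) to 1 (end), damping the chaos.
function intensityAt(progress) {
  return 1 - Math.min(Math.max(progress, 0), 1); // linear decay, an assumption
}

function sampleDisplacement(parts, progress, rand = Math.random) {
  const amp = intensityAt(progress);
  return parts.map(() => ({
    // random offset in [-amp, amp] per axis
    x: (rand() * 2 - 1) * amp,
    y: (rand() * 2 - 1) * amp,
    z: (rand() * 2 - 1) * amp,
  }));
}
```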
One aspect that worries me is that these transformations could feel totally random, without any linearity or consistency. A solution to this issue is to apply some kind of uniform transformation to the whole object, for example a slow, continuous rotation. In this way the object feels like a single entity even when all its parts are scattered around randomly.
The transition between the displaced and the regular states should take into account what I called *incremental legibility*, that is:
- progressively transform more features (position, rotation, scale, texture, colors, etc)
- progressively decrease the intensity of the transformations
in this way we could obtain some kind of *convergence* of the randomness.
Actually the prototype works fine with just the decreasing intensity; I haven't tried yet to transform the different features individually or in a certain order.
Also: displacing the textures doesn't look nice. It just feels broken and glitchy, not really like an object.
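The incremental-legibility idea could be sketched as a schedule that decides, for a given point of the performance, which features are still being transformed and how strongly (the feature order and thresholds below are guesses, not something tested in the prototype):

```javascript
// Incremental legibility as a schedule: features stop being transformed
// one after the other as the performance progresses, and the ones still
// active get a decreasing intensity. Order and cutoffs are assumptions.
const FEATURES = ['position', 'rotation', 'scale', 'color'];

function activeTransforms(progress) {
  return FEATURES
    .map((name, i) => {
      const cutoff = (i + 1) / FEATURES.length; // 0.25, 0.5, 0.75, 1
      return { name, active: progress < cutoff, intensity: 1 - progress };
    })
    .filter(f => f.active);
}
```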
**for what concerns the display:**
1. on one screen we cluster all the objects in a plain view, something like a grid (really packed, I presume? it depends on the amount)
2. on the other we could keep them as they were in the first interaction, and present them through the point of view of the performer, keeping the sound notification when she gets closer and working as a close-up device.

we could also display the same thing on two screens, to lower the density of objects and focus more on the relationship between the performer and the public as a whole, attuning the rhythm
how does this interaction relate to the choreography? is it enough for the performer to be just a point of view?
### practical recap
0. **Intro**
- the user logs in to the website
- they can either select a 3D object or be given one randomly
- there is a brief introduction
- there are simple instructions: volume
1. **Presence**
- *how does it work*
- activated by touch press. the user needs to keep pressing in order to stay connected to the performer.
- when users press they appear in the virtual environment in the form of the same 3D object. the object is still; its position in the space is defined according to the Abstract Supply Chain structure.
- when the performer gets close to users, a notification is sent to their smartphones and plays some sound effects.
- every object has its own sprite of one-shot sounds
- *what is the relation with the performer*
- the performer invites the public to mimic her at the beginning with the invitation (st. thomas, scribble, fingertips)
- the disposition of objects in space offers cardinal points for the performer to traverse the space (perimetro, area,
- *what do we need*
2. **Rhythm**
- *how does it work*
- activated by tap on the touchscreen
- a user's tap activates some kind of process in the virtual environment
- *what is the relation with the performer*
- the performer tries to intercept the interactions of the public, mainly working on intensity and rhythm
- *what do we need*
3. **Space**
- *how does it work*
- activated by touch drag: the user interacts with the touchscreen as a pointer
- a user dragging moves their object in the VR space
- one screen view is from the top and maps roughly 1:1 with the smartphone screens
- the other screen could be either:
* the point of view of the performer
* following one moving object
* static close-ups
- *what is the relation with the performer*
- the performer and the users move in the same space
- the performer tries to intercept the movements of the public, working with directions, speed and intensity
- *what do we need*
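The proximity notification mentioned in the Presence recap could boil down to a distance check between the performer and each object, firing once when she crosses a threshold and re-arming when she leaves (the threshold value and message shape are assumptions):

```javascript
// When the performer enters an object's radius, notify that user once;
// re-arm the notification when she leaves again. Threshold is a guess.
function createProximityWatcher(threshold, notify) {
  const inside = new Set(); // ids currently within range
  return function update(performer, objects) {
    for (const obj of objects) {
      const near = Math.hypot(obj.x - performer.x, obj.z - performer.z) < threshold;
      if (near && !inside.has(obj.id)) {
        inside.add(obj.id);
        notify({ id: obj.id, type: 'performer-close' }); // -> smartphone sfx
      } else if (!near) {
        inside.delete(obj.id);
      }
    }
  };
}
```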