Networks of Neural Networks. Design Intelligence Lab, MIT, 2019–2021

NNN


Description

This research effort seeks to understand and augment human communication and interaction through lighting in shared and public spaces. For almost a century, light has been the primary interface for information transmission and communication; yet input/output devices have remained largely limited to private, single-user scenarios, failing to engender novel social and collective human experiences. Moreover, public lighting has never been used for the complex kinds of communication and information display that we routinely encounter on a desktop computer or mobile phone, being for the most part relegated to acting as a backdrop to our social activities.

By focusing on the use of shared and public lighting, this research seeks to develop new technologies and interfaces that operate at architectural and city scales (bridges, building façades, stadiums, etc.), and that will advance how we experience and use light as a tool for creativity, communication, and learning.

This ‘impedance’ mismatch between creative tools and their output has led lighting and product designers to continuously cobble together their own toolchains, which struggle to (1) take full advantage of the physical topology and unique properties of 1D displays; (2) re-use existing visual content; and (3) portray rich and symbolic content, fundamentally failing to create a common language and engender collaboration between designers.

Approach

To address this problem, we propose to continue into the second stage of Interaction with Purpose by researching and developing interfaces, tools, and techniques for creating content for 1D displays that can be used by both amateur and professional creatives, and that support ease of entry, creative latitude, and a ‘high expressive ceiling’ when creating light-based interactions and experiences.

We will focus the research on two overlapping areas:

(1)  Hardware and Software Interfaces

On the graphical interface side, we will investigate the design of tools and techniques for single and aggregate direct pixel manipulation, seeking to identify interface metaphors and affordances that are useful to both amateur and professional creatives. On the hardware side, we will look at interaction techniques based on phone capabilities such as the camera, accelerometer, and light sensors, including new modalities such as Apple’s Ultra Wideband spatial awareness. This focus will make these interactions accessible to a broad range of users while also engendering in-situ group and crowd-scale interactions, since users will share similarly ubiquitous technology stacks. Specific research topics might include:

· Survey of existing tools and affordances for content mapping (D3, After Effects, MadMapper, etc.)

· Tools for single axis pixel manipulation, area fill, gradients, etc.

· Copy and paste of pattern vs. hue, saturation, brightness information

· Touch vs gestural input for pixel manipulation

· Spatial awareness and directionality for user differentiation
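The single-axis manipulations listed above (area fill, gradients) can be sketched as operations on a flat array of RGB pixels. The function names and the 60-pixel strip below are illustrative assumptions, not part of any existing toolchain:

```python
import numpy as np

def area_fill(strip, start, end, color):
    """Fill a contiguous run of strip pixels with one RGB color."""
    strip[start:end] = color
    return strip

def gradient_fill(strip, start, end, c0, c1):
    """Linearly interpolate between two RGB colors along the strip."""
    n = end - start
    t = np.linspace(0.0, 1.0, n)[:, None]  # (n, 1) blend factor per pixel
    strip[start:end] = (1 - t) * np.asarray(c0) + t * np.asarray(c1)
    return strip

# A 60-pixel luminaire, black by default
strip = np.zeros((60, 3))
area_fill(strip, 0, 20, (255, 0, 0))                     # solid red run
gradient_fill(strip, 20, 60, (255, 0, 0), (0, 0, 255))   # red-to-blue ramp
```

The same one-dimensional indexing generalizes to touch or gestural input, where a drag along the strip simply selects the `start`/`end` range.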

(2) Generative and Adaptive Algorithms

To support the interface, we will research image processing, computer vision, and machine learning techniques that allow existing image and video content to be analyzed, annotated, downsampled, and re-generated onto a 1D display while preserving meaningful stylistic and symbolic characteristics. This form of ‘semantic spatial compression’ can help novice users create complex designs and behaviors with minimal input, by leveraging existing content and tools for photo and video creation. Specific research topics and techniques might include:

· Extraction and re-application of optical flow

· High vs. low spatial filtering

· Anti-aliasing and posterization in high dot pitch luminaires

· Single-axis dithering

· Low-resolution style training and transfer

· Foreground/background re-mapping to wall washes, floods, and spot luminaires

· Minimal input modality in generative algorithms

· Color mixing in indirect, reflected lighting
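As a minimal illustration of the ‘semantic spatial compression’ idea, a 2D frame can be collapsed onto a 1D strip by averaging vertical bands of pixels. This column-averaging is only a crude stand-in for the semantic analysis described above, and the function name and frame are assumptions for the sketch:

```python
import numpy as np

def downsample_to_strip(frame, n_pixels):
    """Collapse an (H, W, 3) frame to an (n_pixels, 3) 1D strip.

    Each strip pixel takes the mean color of one vertical band of the
    frame; richer mappings would weight the bands by salience or motion.
    """
    h, w, _ = frame.shape
    edges = np.linspace(0, w, n_pixels + 1).astype(int)  # band boundaries
    return np.stack([
        frame[:, edges[i]:edges[i + 1]].reshape(-1, 3).mean(axis=0)
        for i in range(n_pixels)
    ])

# Synthetic 90x120 frame: left half red, right half blue
frame = np.zeros((90, 120, 3))
frame[:, :60] = (255, 0, 0)
frame[:, 60:] = (0, 0, 255)
strip = downsample_to_strip(frame, 12)  # a 12-pixel luminaire
```

Applied per video frame, the same reduction preserves gross horizontal motion and color structure, which is where techniques such as optical-flow re-application and single-axis dithering would take over.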

This project was produced for Signify at the Design Intelligence Lab at MIT. © 2021, all rights reserved.
