Mechatronic Optics

The drawing – the transcription of vision into operational, communicative, and instructional notation – is at the very core of design. Deeply variegated and endlessly permutable in its own right, the projective drawing – of three and more dimensions, onto planes and more complex cartographic schemes – is poised to be transformed beyond recognition by the advent of computable visual systems such as machine vision. Machine vision systems – the dynamic processing of images and video – are at the foundation of pattern recognition, spatial reconstruction, real-time scanning, and a range of emerging technologies such as face recognition and autonomous vehicles. They demand new regimes of optical notation and expose new possibilities for organizing visual knowledge automatically. At the same time, the inverse of these operations – dynamic and adaptive film projections – presents new possibilities for the experience of space, allowing us to literally step into these drawings. This class asks a simple question: how can the gap between human and machine representation become a space for a new kind of drawing?

In a highly tangible way, this class investigates how such technologies might transform the architectural drawing on the one hand and the dynamic spatial experience of architecture on the other. Working in groups of two, students will produce one of two types of digital drawing machines. The first type, the “seeing” machine, scans plans, images, text, spaces, or videos and extracts some visual intention from them – in the form of a series of drawings or a reconstituted film. The second type, the “viewing” machine, uses filmic techniques to create a dynamic projection installation that subverts conventions of depth. Each of these machines should adapt and extend the conventions of existing conceptual or mechanical drawings, elevating them to the level of a programmatic and extensible system.
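By way of illustration only – the studio deliberately leaves the form of these machines open – a rudimentary “seeing” machine might be nothing more than a script that traces the edges of a scanned image and re-emits them as a vector line drawing. The sketch below assumes Python with OpenCV; the file names and thresholds are placeholders, not part of the course materials.

    import cv2

    # Read a scanned plan or photograph, find its edges, and trace them as contours.
    image = cv2.imread("scan.jpg", cv2.IMREAD_GRAYSCALE)        # placeholder input
    edges = cv2.Canny(image, threshold1=100, threshold2=200)    # binary edge map
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    # Re-emit the traced contours as polylines in a minimal SVG "drawing".
    height, width = image.shape
    with open("drawing.svg", "w") as svg:
        svg.write(f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">\n')
        for contour in contours:
            points = " ".join(f"{x},{y}" for [[x, y]] in contour)
            svg.write(f'  <polyline points="{points}" fill="none" stroke="black"/>\n')
        svg.write("</svg>\n")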

The course encompasses theoretical, historical, and technical content. At its heart is an investigation of how machines have mediated vision, as well as how they themselves see, through a survey of both the techniques and their cultural function. Topics include oblique projections, map projections (and the equipment to both produce and view them), texture transformations, camera lucidas, stereoscopes, the Clavilux, panorama effects, photocollage techniques, optical distortions, varieties of lenses, and quantum light effects such as interference patterns and X-ray crystallography, as well as immersive experiences such as Xenakis’s polytopes or the composer Scriabin’s Prometheus. Supplementary theoretical perspectives from writers such as Massimo Scolari, Svetlana Alpers, and Jonathan Crary will animate discussion.

Technical workshops will introduce students to conceptual tools such as computational morphology, erosion and dilation, shape skeletons, invariants, and shape comparisons. Software tools will be provided, including Grasshopper components developed specifically for the class for image analysis and shape detection, as well as MadMapper, a widely used package for projection mapping. Hardware tutorials will cover the use of Arduino controllers for image capture and projection, as well as the use of 3D scanners. Students will receive tutorials on, and special access to, the Geometry Lab’s two Universal Robots arms to assist with dynamic scanning or projection projects, including the potential development of hardware attachments for these devices.
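The class’s own Grasshopper components are not reproduced here, but as a rough sketch of the morphological vocabulary above – erosion and dilation thin or thicken the strokes of a scanned drawing, and the morphological skeleton reduces a shape to its one-pixel-wide armature – the following Python/OpenCV example (an illustrative assumption, with placeholder file names) shows what these operations look like in code.

    import cv2
    import numpy as np

    def morphological_skeleton(binary):
        # Lantuejoul's method: repeatedly erode the shape and collect what an
        # opening would remove at each step; the union of residues is the skeleton.
        skeleton = np.zeros_like(binary)
        kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
        img = binary.copy()
        while cv2.countNonZero(img) > 0:
            eroded = cv2.erode(img, kernel)
            opened = cv2.dilate(eroded, kernel)
            skeleton = cv2.bitwise_or(skeleton, cv2.subtract(img, opened))
            img = eroded
        return skeleton

    # Load a line drawing and binarize it so strokes are white on black.
    drawing = cv2.imread("plan.png", cv2.IMREAD_GRAYSCALE)            # placeholder input
    _, binary = cv2.threshold(drawing, 127, 255, cv2.THRESH_BINARY_INV)

    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    eroded = cv2.erode(binary, kernel)    # thins strokes, erases small marks
    dilated = cv2.dilate(binary, kernel)  # thickens strokes, closes small gaps

    cv2.imwrite("eroded.png", eroded)
    cv2.imwrite("dilated.png", dilated)
    cv2.imwrite("skeleton.png", morphological_skeleton(binary))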
Some familiarity with Grasshopper or scripting is a plus, but not required. What is required is a fascination with the perceptual implications of drawing and machine-mediated vision.