Online + Global | Workshop Dates: September 11-12
REALIGNMENTS: Toward Critical Computation
Visit the Workshops Exhibition:
As part of the REALIGNMENTS: Toward Critical Computation conference, ACADIA is pleased to offer a lineup of exciting workshops led by expert instructors from around the globe. The conference chairs have curated workshops that uniquely address the conference’s theme of distributed modes of working and collaboration, including topics such as AI, remote robotics, co-design, machine learning, and much more. The workshops will take place September 11–12. All workshops will be held online via Zoom or a similar video-conferencing platform.
Registration for the 2021 ACADIA Workshops is now closed. Thank you!
Note about software licenses: Unless otherwise indicated in the workshop descriptions below, workshop participants are responsible for installing the required software on their personal computers and securing all necessary licenses. In most cases, software companies provide trial licenses and/or educational licenses that will be sufficient for the duration of the workshop. Please read the software requirements for each individual workshop below.
Note about grants: This year, generous funding from Autodesk will enable us to provide scholarships and grants to students and professionals in partnership with NOMA, NOMAS, and schools of architecture in Mexico. Please see the Grants page for more information and application links.
Workshop registrations open on August 9, 8:00 AM PDT / 11:00 AM EDT / 5:00 PM CET. Registration is first come, first served. For questions about the workshops, visit the FAQ or email 2021@acadia.org.
ALL WORKSHOPS ARE APPROVED FOR 12 AIA CES CREDITS.
- Workshop 1 - Latent Morphologies: Disentangling Design Spaces
- Workshop 2 - Distributed Collaborations - KUKAcrc Cloud Remote Control
- Workshop 3 - Collaborative AI – Human + AI Form
- Workshop 4 - Augmented Architectural Details
- Workshop 5 - Building Web-Based Drawing Instruments
- Workshop 6 - Co-crochet Computing Stitches for Collective and Distributed Crocheting
- Workshop 7 - Enhancing Fungi-based Composite Materials With Computational Design And Robotic Fabrication
- Workshop 8 - Knitted Growth: Scaffolds For Living Root Spans
- Workshop 9 - Physics Towards Critical Assemblies
- Workshop 10 - The Generative Game
- Workshop 11 - Form-Finding Staircases With COMPAS CEM
- Workshop 12 - Remote Robotic Assemblies
- Workshop 13 - Generative Design and Analysis in Early-Stage Planning with Spacemaker
Latent Morphologies: Disentangling Design Spaces
Workshop Leaders:
- Daniel Bolojan (Florida Atlantic University, Assistant Professor)
- Shermeen Yousif (Florida Atlantic University, Assistant Professor)
- Emmanouil Vermisso (Florida Atlantic University, Associate Professor)
Abstract: The workshop will explore ways to connect different neural networks (e.g., CycleGAN and StyleGAN) to explore the search space of architectural inspiration. Particular semantic references will serve as input for a pre-trained network whose output data are further investigated using another neural network. The datasets will focus on exploring various resolutions of the urban domain and assessing possibilities for emerging patterns through interpolative and extrapolative strategies. From a process point of view, we are interested in identifying the relevance of certain types of neural networks and their ability to access creative potential in a targeted and/or heuristic, open-ended fashion. Furthermore, it is important to consider the capacity of these nested workflows to alter our immersion in the design investigation by accessing a design space that is otherwise beyond the designer’s reach. By moving beyond rule-based design spaces, AI’s feature-learning capabilities, combined with the incorporation of additional inspirational sources (outside architecture), enable creative exploration within an extended space. Our perception of these expanded design possibilities is crucial because it may point to the direction of future search.
Description:
The premise of this workshop lies in encoding design intentionality as a continuous set of actions, rather than separate input-output tasks. Workshop participants will be exposed to at least two main artificial neural network structures and the idiosyncrasies of data curation for each type. The goal is to develop a sensibility for such automated processes by leveraging the power of AI tools, and also to introduce participants to ways of evaluating the outcomes of these workflows.
Methods involve experimentation with multiple connected deep learning models, towards prototyping new design workflows. Testing and evaluation of experimental workflows will be pursued through a lens of process-creativity rather than product-creativity.
Distributed Collaborations - KUKAcrc Cloud Remote Control
Workshop Leaders:
- Dr. Sigrid Brell-Cokcan (Chair of Individualized Production RWTH Aachen, Association for Robots in Architecture)
- Johannes Braumann (Creative Robotics at UfG Linz, Association for Robots in Architecture)
- Karl Singline (Creative Robotics at UfG Linz)
- Dr. Sven Stumm (Robots in Architecture Research)
- Ethan Kerber (Robots in Architecture Research, Chair of Individualized Production RWTH Aachen)
Abstract: The ACADIA 2021 workshop Distributed Collaborations takes Cloud Remote Control to the next level, allowing international participants to control robots in two different European locations. Workshop participants will remotely control robots at both the Chair of Individualized Production, RWTH Aachen, Germany, and the Department of Creative Robotics, University of Arts and Industrial Design, Linz, Austria. It was at ACADIA 2010 that Sigrid Brell-Cokcan and Johannes Braumann first introduced KUKA|prc, a new approach to parametric robotic production. Since then, the robotic community has grown in unbelievable ways, built incredible structures, and gathered for world-class conferences from ACADIA to RobArch! Ten years later, at ACADIA 2020, Robots in Architecture members Sven Stumm and Ethan Kerber introduced KUKA|crc, a new approach to remote robotic control. This innovation allows international users to collaborate closely with robots while remaining safely socially distant. Since then, we have been improving the software, adding IoT infrastructure, and readying this technology for rollout around the world. KUKA|crc helps people learn about robots even if they can’t get into the lab. KUKA|crc also empowers you with a new cloud-based IoT infrastructure, allowing for multi-robot factory setups and easy wireless integration of IoT devices and end effectors.
Description: As online collaboration becomes the new normal, it is vital that we explore new ways of physically working together, and with robots, even across great distances. By extending Cloud Remote Control to multiple locations, we internationally distribute both design and fabrication in new integrated and automated processes. To continue educating artists, architects, and engineers in an accessible, exciting workshop, we propose a collaborative process in which each participant robotically assembles an element of a structure. The assembled piece is then instantiated in the user’s digital twin so that the next participant can continue building the structure, each participant sequentially taking turns to design and build a spontaneous construction. This exploration will continue throughout the workshop, working together across international locations, allowing users to work on separate but related structures in unique locations from the safety of their office or home. Participants will learn how to design robotic programs, from basic motion concepts to more developed strategies for manipulating planes and generating tool paths. The workshop will teach you how to connect to international robots, book access, push programs, and monitor processes. You will leave with a deeper understanding of robotic workspaces and learn how to optimize paths to avoid constraints and collisions. We imagine a future where you can access international robots as a service, simulate robotic capacities, get early design decision support, and optimize for automation. We imagine a future where you can book, operate, and monitor robots when you need them and where you need them, depending on project location, material specification, and cell configuration.
Work with robots everywhere, from anywhere.
KUKA|crc Cloud Remote Control.
Collaborative AI – Human + AI Form
Workshop Leaders:
- Chien-hua Huang (China Academy of Art)
- Zach Beale (University of Applied Arts Vienna)
Abstract: This workshop explores the intersection of games and AI as a novel way to approach architectural participatory generative design. The recent rapid advancement of machine learning and AI in the architectural industry has so far operated with little room for a wider audience or for human perception. As architects and designers, allowing a wider spectrum of evaluation will be vital in the design process. Inspired by machine learning (ML) applications in Unity3D as game design elements, there is potential to promote the active participation of architects, artists, and even the public in design generation processes through gamification. In this workshop, we will explore a novel generative design methodology driven by reinforcement learning combined with active player interaction, focusing on the actor-aware generation of logics and complex spatial perceptions. How can a machine think creatively? What forms may be generated from active collaboration between AI and players? What machine-made elements become imperceptible to a human? These questions approach the goals of AI proponents concerning a new generation of AI that would help designers through the novel augmentation of machine vision and automation. In the workshop, participants will work with a given set of reinforcement-learning-based frameworks and packages in Unity3D to explore design ideas and the potential of perception-aware human-machine interaction. Eventually, we will approach collective layers of design generation through human-AI collaboration.
Description: In the near future of our industry, our context and ways of practice will be in constant shift, driven by the continual re-interpretation of data models, from natural/human to artificial/machine, and by machine-assisted augmentation of design and human perception. This workshop aims at constructing interrelations between AI and human design thinking, articulation, AI-informed form-finding, and human-machine interaction in the future of design production. Often, works that concern themselves with machine learning are designed and visualized from an alien perspective: that of an object on an empty field, seen from far above, outside the human field of view roughly 1.8 m off the ground. This has the side effect of making the resultant works of the unique combination of humans and AI difficult to understand from the perspective of the everyman: how does one see oneself in space if there is no sense of scale, and no interaction with the perspectival nature of the eye? Once these spaces are visualized from the first person, they often feel discordant and underdesigned. The machine and human have operated on the macro scale, leaving the micro scale off the drawing board, as it were. What if we, instead, operate on this micro scale? Instead of forming whole architectures, why not focus on the minutiae, the single experience of a space? Thus, this workshop will apply reinforcement learning in Unity (ML-Agents) and Unity’s gameplay functions as the technical framework. We will use state-of-the-art techniques in Unity to generate methodologies for participatory design that involves both human players and AI agents. We will explore the methods and meaning of RL in design practice involving subjective factors. How can a machine think creatively? What forms may be generated from active collaboration between AI and players? What machine-made elements become imperceptible to a human?
These questions approach the goals of AI proponents concerning the new generation of AI that would help designers through the novel augmentation of machine vision and automation.
Methods: Establish workflows during the technical sessions for tool-based exploration and experimentation in a game design environment through Unity3D and its ML-Agents package. Extensively explore the potential of AI and its interactions through framed problems and discussion; for example, we can set up a quick web game to be tested by other participants and guests. Evaluate design results by investigating the effectiveness of human-machine collaboration and translating the model to other platforms.
Learning Outcome:
- Understand the fundamental concepts behind AI-informed design (especially Reinforcement learning)
- Know programming and operational concepts of RL and gameplay in Unity3D
- Understand the potentials and constraints of, and know how to develop, an interactive application for design articulation
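As a minimal sketch of the reinforcement-learning fundamentals in the first learning outcome above (plain tabular Q-learning on a toy problem, not the Unity ML-Agents framework the workshop actually uses), a simple agent can learn a policy by trial and error:

```python
import random

random.seed(0)

# Toy 1-D world: states 0..4, goal at state 4; actions: 0 = left, 1 = right.
# This is generic tabular Q-learning, NOT the ML-Agents API -- concept only.
N_STATES, GOAL = 5, 4
q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration

def step(state, action):
    """Move left/right on the line; reward 1 at the goal, small cost otherwise."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else -0.01
    return nxt, reward, nxt == GOAL

for _ in range(500):                 # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda i: q[s][i])
        nxt, r, done = step(s, a)
        # Q-learning update toward the bootstrapped target
        q[s][a] += alpha * (r + gamma * max(q[nxt]) * (not done) - q[s][a])
        s = nxt

# The learned greedy policy should always move right, toward the goal.
policy = [max((0, 1), key=lambda i: q[s][i]) for s in range(N_STATES - 1)]
print(policy)  # expected: [1, 1, 1, 1]
```

In the workshop itself this role is played by ML-Agents' trainers inside Unity; the sketch only shows the update rule and reward loop that those trainers generalize.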
Augmented Architectural Details
Workshop Leaders:
- Jeffrey Anderson (Pratt Institute, Graduate Architecture and Urban Design Program)
- Ahmad Tabbakh (Pratt Institute, Center for Experimental Structures)
Abstract: Augmented Reality (AR) and real time rendering are becoming important technologies for creating remote social experiences, visualizations, and training simulations in a number of fields. In the architecture industry, this technology not only offers a new medium for creating visual assets but opens up possibilities for new forms of communication and coordination between various trades including architects, contractors, and clients through facilitating spatially calibrated digital interactions. In order to speculate on these possibilities, our workshop will explore the idea of “Augmented Architectural Details.” These will take the form of small-scale, spatially-calibrated AR overlays which explain assembly systems through human interaction. This workshop will provide students with tools to author digital content to a custom AR application using the Unity3D Game Engine. Students will gain skills in animation, interaction, lighting, and mesh optimization and learn how to translate these effects to an AR interface. In order to facilitate rapid experimentation with AR content, students will be provided with an AR application that will allow them to exchange and import Unity Asset Bundles onto their iOS or Android phones in real time. This will focus the workshop on developing technical and critical thinking skills around the topic of Augmented Architectural Details.
Description: Real time rendering and spatial computing represent a paradigm shift from static images, pre-rendered animations, and 2D media as the dominant form of architectural representation. The designer’s toolkit needs to be realigned to take advantage of these technologies and shift towards experiential and interaction based forms of design communication. Furthermore, spatial computing offers the ability to accurately overlay these real-time rendered digital experiences on top of real world contexts. We will focus this workshop on developing AR content for smart phones and with multiplayer frameworks. Producing AR content no longer requires specialized equipment, controlled conditions, or expensive hardware setups, but is accessible to anyone with a smartphone, leveling the playing field for producing architectural experiences and easily sharing them. Multiplayer frameworks, often used for game development, are now finding their way into these forms of architectural representations, allowing for remote collaboration. The workshop leaders will explore these ideas and technologies by teaching technical skills in Unity3D and leading the students through a workflow that allows them to build experiences on their phones. Students will be provided with a pre-published AR application that will allow them to exchange and import Unity Asset Bundles onto their iOS or Android devices without having to go through the full App publishing process.
Students will learn the following concepts/workflows during the workshop:
- Concepts around simultaneous localization and mapping (SLAM) and both “markered” and “markerless” AR.
- Modeling, lighting, rendering, interaction, and animation skills in the Unity3D Game Engine
- Specific workflows for authoring AR content in Unity3D
- How to calibrate relationships between virtual and real objects
- How to prototype and share Asset Bundle Packages for use in an AR application specifically created for this workshop
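The calibration point above amounts to expressing virtual content in the coordinate frame of a detected real-world anchor. A minimal 2-D sketch of that idea (the math only, with illustrative names; Unity's AR Foundation handles this in C# and in 3-D):

```python
import math

# Calibration idea in 2-D: a scanned marker gives the pose of a real-world
# anchor (position + rotation) in the device's world frame. Virtual content
# authored relative to the marker is mapped into world space by one rigid
# transform. Conceptual sketch only -- not Unity's API.

def make_pose(tx, ty, theta):
    """Homogeneous rigid transform: rotate by theta, then translate by (tx, ty)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]

def apply(T, p):
    """Map a local point p = (x, y) into the world frame via T."""
    x, y = p
    return (T[0][0] * x + T[0][1] * y + T[0][2],
            T[1][0] * x + T[1][1] * y + T[1][2])

# Marker detected 2 m ahead and 1 m right of the world origin, rotated 90 deg.
marker_in_world = make_pose(2.0, 1.0, math.pi / 2)

# A virtual detail authored 0.5 m along the marker's local x-axis...
detail_local = (0.5, 0.0)
# ...lands at the correct world position, rotated onto the y-axis.
print(apply(marker_in_world, detail_local))  # ~ (2.0, 1.5)
```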
Building Web-Based Drawing Instruments
Workshop Leaders:
- Galo Canizares (Texas Tech University)
Abstract: Today, it is difficult to separate the process of designing from the software used to realize said design. While Alberti may have established one of the earliest distinctions between the ideal design and its physical realization, the emergence of software has collapsed this traditional flow of creativity to the extent that digital tools are no longer merely tools for making: they are primarily tools for thinking. This workshop takes on the premise that software is an ingrained part of the creative process and that real-time technologies such as internet browsers and design tools push back not only on acts of design, but also on our social consciousness. We will find that to draw on a screen is not simply to dream a perfect image and reproduce it flawlessly with interface tools, but is instead a collaborative negotiation with these powerful platforms and the data managed within them. Participants will investigate how web-based drawing apps can be easily built and deployed using open-source and freely accessible tools. Together we will cover introductory graphics programming with the p5js JavaScript library and deployment using NodeJS. Using code and internet browsers as sites for inquiry, participants will develop simple apps that showcase how architectural effects can be explored through the screen and engender discussions about perception and use.
Description: The purpose of this workshop is twofold: pragmatically, it is to learn how to use the programming language JavaScript (p5js) to make interactive and dynamic web-based installations, and conceptually, it is to discuss the accessibility of open-source design tools and programming as a liberating act. Participants will be introduced to several freely accessible tools and methods for writing code, publishing that code, and deploying it to an online server. We will cover the following software: Atom (for writing code), Git (for version tracking), Github (for hosting files), and Heroku (for deploying apps for free). We will also cover introductions to the p5js graphics programming library and the NodeJS runtime for packaging apps. The material outcome of this workshop will be individual web-based drawing apps accessible online immediately after the workshop (using Heroku). The contexts for this workshop are the tradition of architectural drawing and experimental software design. We will draw from both worlds to create drawing instruments (not necessarily tools that have utility) that can be tuned to achieve different visual effects. They may respond to drawing conventions such as snapping, dragging, clicking, and other physical interactions with software. As participants learn the practical methods, we will also actively discuss the role of software in design processes and the effects interfaces have on the design imaginary. The methods will be primarily step-by-step tutorials, interrupted by more conceptual mini-lectures and background on the technology being used. It is important to discuss the concepts behind browsers, servers, creative coding, and open-source tools, as well as their technical aspects. We will also address programming as a way of liberating oneself from the constraints of proprietary software.
The goal is to explore what internet browsers and open-source programming libraries can offer designers and architects from a critical perspective beyond utility.
Co-crochet Computing Stitches for Collective and Distributed Crocheting
Workshop Leaders:
- Özgüç Bertuğ Çapunaman (Ph.D. Student, Department of Architecture, Pennsylvania State University)
- Cemal Koray Bingöl (PhD Student, Lecturer, Coordinator of the Digital Fabrication Laboratory, Department of Informatics, Istanbul Technical University)
- Benay Gürsoy (Assistant Professor, Director of ForMat Lab, Department of Architecture, Pennsylvania State University)
Abstract: Crocheting is a hands-on craft that involves repetitive manipulation of a single continuous thread with a hook-like tool to generate surfaces and 3D forms. The step-by-step stitching procedure in crocheting can be associated with algorithms of which the steps are defined through crochet patterns. Crochet patterns are text-based representations, similar to G-code in additive manufacturing. They enable the documentation and communication of crocheting know-how. In this workshop, participants from different locations will collectively design and crochet a branching spatial structure. Each participant will receive a Co-Crochet Kit prior to the workshop that will include crocheting materials. During the workshop, participants will first collectively design a branching structure and learn the basics of crocheting. They will then generate the crochet patterns of the components of the branching structure designed using a computer algorithm developed by the instructors that generates crochet patterns of 3D objects modeled in CAD software. At the end of the workshop, each participant will have the crochet patterns of at least one component to crochet. They will be provided a prepaid shipping label to ship the crocheted components to the instructors in the US. The instructors will assemble the crocheted components of the branching structure - “Voltron!”
Description: In this workshop, participants from different locations around the world will collectively design and crochet a branching spatial structure. Prior to the workshop, they will receive a Co-Crochet Kit that will include crocheting materials, such as crochet hooks and yarns. During the first day of the workshop, they will generate parametric branching structures in Grasshopper and discretize the structures into components. At the end of the first day, one design will be selected by the group. During the second day of the workshop, participants will be introduced to basic crocheting techniques for single crochet stitch and learn to use a computer algorithm to generate crocheting patterns for the components of the branching structure. Following the workshop, they will crochet the assigned components. They will be provided pre-paid shipping labels to ship the crocheted components to the instructors in the US. We will assemble the crocheted components into the branching structure.
Why will a Co-Crochet Kit be sent? Yarns and hooks used for crocheting significantly affect the outcome. Therefore, we want to make sure that all participants start by using the same type of yarn and hooks.
Why a branching structure? Through crocheting one can create 3D objects. The computer algorithm that we have developed can generate crocheting patterns of various types of 3D objects that are modeled in the computer, including branching structures. Since we want to collect and assemble the 3D components that the participants will crochet, we decided that a branching geometry would work best.
How to go from the 3D model in the computer to crocheting geometries? Physical constraints constitute the variables of the computer algorithm that we developed to generate crochet patterns of 3D objects modeled in the computer. These include determinate variables such as the material properties (yarn weight) and the tool size (crochet hook), but also indeterminate variables such as the effect of the crafter’s hand (grip on the yarn) while producing stitches. The latter, being unique for each individual crafter, is specified through a physical test swatch that the crafter crochets before running the computer algorithm.
From digits to stitches: What is the output of the computer algorithm? The output of the computer algorithm is a crochet pattern in conventional text form to materialize the digital model. Following the resulting pattern, one is able to crochet the digital model by hand. The overall process is thus a transition from the digital to the physical, where physical constraints continuously inform this transition and shape the outcome.
Wait, how do the “physical constraints continuously inform the transition from the digital to the physical and shape the outcome”? The computer algorithm generates the crochet patterns based on the 10x10 stitch-swatch that the users crochet before running the algorithm. The x-y dimensions of this stitch-swatch are the main inputs. This way, the physical variables associated with the yarn type, hook size, and most importantly crafter’s hand are combined into a single input. The crochet patterns are generated based on this input. This means that for the same 3D model, the computer algorithm generates unique crochet patterns for each individual. This way, the overall dimensions and forms of the discrete components that the participants of the workshop will crochet will be maintained and allow us to assemble them into a single structure.
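A hypothetical sketch of this swatch-based calibration in Python (the function names and the tube example are illustrative assumptions, not the instructors' actual algorithm): measure the 10x10 swatch, derive a per-crafter gauge, and size a pattern from it.

```python
# Hypothetical sketch of swatch-based calibration (NOT the instructors'
# actual algorithm): a 10x10 single-crochet swatch yields a per-crafter
# gauge that folds yarn, hook, and hand into one input, as described above.

def gauge_from_swatch(width_cm, height_cm, stitches=10, rows=10):
    """Stitch width and row height in cm, unique to yarn, hook, and hand."""
    return width_cm / stitches, height_cm / rows

def tube_pattern(circumference_cm, length_cm, gauge):
    """Round a target tube (one branch component) to whole stitches and rows."""
    stitch_w, row_h = gauge
    return round(circumference_cm / stitch_w), round(length_cm / row_h)

# Two crafters crochet the same 10x10 swatch but measure different sizes:
loose = gauge_from_swatch(12.0, 11.0)   # looser hand -> bigger stitches
tight = gauge_from_swatch(10.0, 9.0)

# The same 24 cm x 18 cm tube therefore needs a different pattern per crafter,
# while the finished physical dimensions stay compatible for assembly:
print(tube_pattern(24.0, 18.0, loose))  # (20, 16)
print(tube_pattern(24.0, 18.0, tight))  # (24, 20)
```

This is why the same 3D model yields a unique crochet pattern for each individual, yet the components still fit together into one structure.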
Important Note: this workshop will require the use of a custom kit of materials/tools shipped by the instructors. Additional lead time and arrangements might be necessary to receive these on time. If you live outside the continental U.S. and believe you may have difficulties receiving the kit, please reach out to the instructors directly before enrolling in this workshop, or consider choosing a different one. ACADIA assumes no responsibility for late/lost kits, and cannot reimburse participants under these circumstances.
Enhancing Fungi-based Composite Materials With Computational Design And Robotic Fabrication
Workshop Leaders:
- Jonathan Dessi-Olive (Assistant Professor, Department of Architecture, Kansas State University)
- Omid Oliyan (Senior Computational Designer, Silman)
- Ali Seyedahmadian (Senior Design Engineer, Eventscape A+D)
Abstract: This workshop explores the capacities of computational systems to develop an integrative design-to-fabrication workflow for fungi-based materials. “Myco-materials” are composites made by entangling mushroom mycelium around agricultural or forestry wastes such as hemp or sawdust. As the need for zero-waste materials increases, myco-materials continue to garner attention from engineers, building scientists, and designers. Recently, imaginative architecture-scale structures made with myco-materials have used fabrication techniques canonically familiar to architecture, including modular bricks, custom molded blocks, fabric-formed structures, and robotic printing. This workshop seeks to expand upon the existing constructive paradigms by asking: in the realization of smarter and stronger mycelium composite structures, what are the cooperative logics that have yet to be discovered between fungal growth, computational design, and digital fabrication? Participants will investigate the capacities of computational design and mixed reality (MR) through at-home experiments, gaining experience with the craft of growing mycelium. MR-guided weaving, winding, and knotting will be used to inform the design and production of flexible reinforcement lattices used to strengthen the surface and inner matrix of mycelium composites. Hand-work by the participants will inform robotic procedures that the organizers will subsequently demonstrate through the fabrication of a prototype.
Description: One of the most significant challenges of using mycelium in large-scale structural applications is that it is an inherently weak material. Depending on the substrate and the amount of compaction that is possible, most products made of a uniform composite matrix have a strength no greater than that of high-density foam. Preliminary experiments with myco-materials by the organizers have revealed that diversifying the matrix through strategic sizing and orienting of agricultural strands and natural fibers can increase the strength of myco-materials both in their intermediary growth phase and their final inert phase. Such performative benefits would drastically expand the range of possible real-world applications of myco-material composites. The focus of this workshop is on the material itself, and on how computational design tools and fabrication technologies will be essential to the success of mycelium materials in a future of sustainable building materials.
Participants will be invited to join these exciting first steps into a wide-open field of research that seeks to develop structural resistance in mycelium composite materials through computational structural design, mixed reality, and robotic fabrication procedures. A series of introductory talks by the workshop organizers will provide the necessary background on materials, design tools, and fabrication methods. A custom web-based design tool developed by the organizers will be introduced that lets participants explore internal reinforcing patterns for their own designs. Each participant will receive a kit that includes the materials needed to fabricate their reinforcing lattices, using a combination of smartphone-based mixed reality and traditional craft techniques including weaving, knotting, and winding. Similar techniques for creating reinforcement with robotic fabrication will be demonstrated by the organizers, followed by a panel discussion to conclude the workshop. Based on their ongoing research and the work accomplished during the workshop, the organizers intend to apply these robotic techniques toward a large-scale prototype, which will be shared during the main conference.
Important Note: this workshop will require the use of a custom kit of materials/tools shipped by the instructors. Additional lead time and arrangements might be necessary to receive these on time. If you live outside the continental U.S. and believe you may have difficulties receiving the kit, please reach out to the instructors directly before enrolling in this workshop, or consider choosing a different one. ACADIA assumes no responsibility for late/lost kits, and cannot reimburse participants under these circumstances.
Knitted Growth: Scaffolds For Living Root Spans
Workshop Leaders:
- Mariana Popescu (TU Delft)
- Robin Oval (University of Cambridge)
Abstract: This workshop will explore the design of deployable knitted textile membranes as scaffolds for climbing plants. The knitted textile is designed to guide the growth of a plant into spans following principles of structure and placing material (in this case growth) where needed. The playful example is inspired by the living root bridges of Northern India, which are made by guiding the roots of a tree over a stream, allowing them to grow into spanning structures over time. These growing structures are used as an illustration for aspects related to the design and fabrication of knitted textile moulds that can be used as efficient, lightweight and deployable formwork for complex geometries. During the workshop participants will learn how to use form-finding methods to design tensile structures including non-manifold and non-orientable geometries. Structural and fabrication considerations will be highlighted through topology explorations and the deliberate choice of singularities and segmenting of the overall geometry. In considering how to best guide the growth paths through the textile, participants will zoom in on the design of specific textile features such as ribs, openings, channels, and textures. A spanning design will be fabricated after the completion of the workshop.
Description: The workshop will be conducted as a computational design workshop for fabrication that deals with all the components of designing, fabricating and building tensile knitted structures. The following aspects will be addressed:
- Design and form-finding of tensile structures (including non-manifold and non-orientable geometries)
- Topology explorations (placing singularities, features, segmenting the overall geometry)
- Principles of knitted textile design
- Workflow for fabricating knitted textiles.
- The workshop will be run using Rhino, Grasshopper and COMPAS, a Python-based computational framework for collaboration and research in architecture, structures and fabrication.
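The form-finding step above can be illustrated with a minimal force density sketch. This is not the COMPAS API (whose data structures differ); it is a self-contained toy, and the node names, coordinates, and force densities below are invented for demonstration.

```python
# Minimal force-density form-finding sketch (illustrative only; the workshop
# uses COMPAS, whose actual API differs). For a tension net with no external
# loads, each free node sits at the force-density-weighted average of its
# neighbours, so simple fixed-point iteration converges to equilibrium.

def form_find(nodes, edges, q, fixed, iterations=100):
    """Iteratively relax free nodes toward equilibrium."""
    pos = {k: list(v) for k, v in nodes.items()}
    for _ in range(iterations):
        for n in pos:
            if n in fixed:
                continue
            wsum, acc = 0.0, [0.0, 0.0, 0.0]
            for (a, b), qi in zip(edges, q):
                other = b if a == n else a if b == n else None
                if other is None:
                    continue
                wsum += qi
                for i in range(3):
                    acc[i] += qi * pos[other][i]
            pos[n] = [c / wsum for c in acc]
    return pos

# A single free node 'm' hung between four anchors of different heights:
anchors = {"a": (0, 0, 1), "b": (2, 0, 1), "c": (2, 2, 0), "d": (0, 2, 0)}
nodes = dict(anchors, m=(1, 1, 5))
edges = [("m", "a"), ("m", "b"), ("m", "c"), ("m", "d")]
q = [1.0, 1.0, 1.0, 1.0]  # equal force densities in all four edges
res = form_find(nodes, edges, q, fixed=set(anchors))
print(res["m"])  # equilibrium at the anchors' centroid -> [1.0, 1.0, 0.5]
```

With equal force densities the free node lands at the centroid of its anchors; varying `q` per edge is what lets a designer steer the membrane shape.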
Physics Towards Critical Assemblies
Workshop Leaders:
- Daniela Atencio (Universidad de Los Andes, Bogotá)
- Nicolas Turchi (Zaha Hadid Architects, University of Bologna, Polytechnic of Milan)
Abstract: The workshop will explore procedural workflows that integrate Autodesk Maya, MASH, and Grasshopper, exposing participants to polygon modeling, parametric design, and physics-based animation techniques. Through a sequence of step-by-step operations, the workshop will demonstrate how a clear process can yield a plethora of outcomes that can be further discretized, evaluated, and assimilated into a final design proposal. The workshop will focus on the object, both as a static and as a dynamic entity, subjected to computational procedures catalogued into categories including ‘Addition’, ‘Distribution’, ‘Morphing’, ‘Instancing’, ‘Aggregation’ and ‘Projection’. Architectural and tectonic qualities will begin to emerge as they are tested against contemporary representational techniques, including animation. Participants will learn integrated, hybrid workflows by using multiple platforms in synergy while experimenting with and defining the character of their ‘superobjects’.
Description: The workshop is aimed at students and professionals interested in learning procedural workflows for critical assemblies and physics. Each phase will start with an overall explanation of the software and its interface, and will end with a Q&A and design session.
Phase 1 Additive Fields & Forces: Grasshopper. A distinct-additive workflow focused on geometrical additions manipulated by field forces. Remapped images provide the initial notional values for digital aggregations built on meshes generated in Grasshopper. Mesh and image resolution dictate the intricacy of these values: they are directly proportional to the number of vectors within the mesh and to the amount of digital matter that can be generated. Curvature analysis filters this information into distinct force fields whose vectors create categorized patterns for three-dimensional formal expression across the mesh. Forces with varying speeds and intensities are introduced and re-introduced as computable influences on digital matter, generating discrete pipes and/or primitives (boxes, spheres) informed by those forces. These algorithmic forces are transformed into additive computational assemblies, so that every component of the overall geometry is defined within an evolved, tectonic self-aggregation system. The greater the influence, the greater the complexity of the resulting distinct digital conditions. In terms of representation, artificial lights and shadows are introduced to reinforce and manipulate the geometry, highlighting otherwise unseen conditions within the object.
Participants will learn (Rhinoceros, Grasshopper):
- Introduction to Rhinoceros and Grasshopper interface.
- Generation of solids (surfaces and meshes).
- Image sampling and remapping.
- Mesh distributions and subdivisions.
- Mesh population and abundance.
- Mesh simplex and 4D noise alterations.
- Mesh curvature analysis.
- Mesh data filtering.
- Analysis and filtering of complex mesh patterns.
- Fields and forces operations.
- Piping, discrete curves, rails, operations.
- Artificial lights, shadows and real time rendering.
These operations are all performed within Rhino/Grasshopper, in combination with a set of plug-ins that includes Weaverbird, Mesh analysis, Pufferfish, and Lunchbox.
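The image-sampling and remapping step in the list above can be sketched in a few lines. In the workshop this is done with Grasshopper components; the raster values and ranges below are invented for demonstration.

```python
# Illustrative sketch of image sampling and remapping: brightness values from
# a (here hard-coded) grayscale raster are remapped to per-vertex displacement
# magnitudes, the notional values that later drive field forces on the mesh.

def remap(v, lo, hi, new_lo, new_hi):
    """Linearly remap v from the range [lo, hi] to [new_lo, new_hi]."""
    return new_lo + (v - lo) * (new_hi - new_lo) / (hi - lo)

# A 3x3 "image": 0 = black, 255 = white (invented sample data).
image = [[0, 128, 255],
         [64, 128, 192],
         [255, 128, 0]]

# A flat 3x3 grid of mesh vertices at z = 0, one vertex per pixel.
verts = [(x, y, 0.0) for y in range(3) for x in range(3)]

# Displace each vertex along +z by its remapped pixel brightness (0..2 units).
displaced = [(x, y, z + remap(image[int(y)][int(x)], 0, 255, 0.0, 2.0))
             for (x, y, z) in verts]
print(displaced[2])  # brightest corner pixel lifts its vertex -> (2, 0, 2.0)
```

Higher-resolution images and meshes simply give more samples, which is why the description ties intricacy directly to resolution.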
Phase 2 Additive Fields & Forces in Motion: Maya MASH. After an introduction to basic polygonal modeling in Maya, this workflow focuses on form-finding and real-time representation as a design strategy, controlling time and physics (influencers, orientations, speed, collisions, resistance) in Maya’s MASH to produce synthetic qualities and conditions. The geometry (curves, meshes, faces) is altered through duplicates and repetitions with mirror cuts along the Cartesian axes, creating collisions and intersections between geometries. A second condition adds a single unit (a predefined, changeable form) that is repeated on every edge, vertex, or face of the geometry and can be distributed and organized proportionally, producing a sensible geometry that can be exposed to dynamic variables. Physics- and time-based notations (computable effects such as influencers with orientation, rotation, and offset) are introduced through computer-controlled simulations, becoming a factor in the process that allows progressive self-calculation and the automation of endless alterations.
Participants will learn (Autodesk Maya):
- Introduction to Maya interface.
- Introduction to basic polygonal modeling.
- Introduction to Maya MASH.
- Introduction to animation tools.
- Animated operations such as dynamic mirroring and morphing.
- Animated parts of an object (curves, vertex, faces).
- Time-based techniques to create motion and interactive form-finding.
- Real-time design by altering and manipulating time, forces, and physics (resistance, collisions, orientations, influencers) as well as the object itself.
- Real-time representation using lights and materials in Arnold.
Phase 3 Superobject: After Effects. The previous phases are combined into a mutable, hybrid dynamic representation.
The students will learn (After Effects):
- Introduction to After Effects interface.
- Media production and combination of frames.
- Introduction and manipulation of presets.
- Video rendering.
The Generative Game
Workshop Leaders:
- Runjia Tian (Machine Learning Researcher, Lab for Design Technologies, Harvard Graduate School of Design, Boston, MA)
- Zhaoyang Luos (Ph.D. candidate, Graduate Research Assistant and Teaching Fellow, Architectural Digital Design and Technology Institute (ADDTI), Harbin Institute of Technology, Harbin, China)
- Linhai Shen (Master in Architecture, Graduate Research Assistant)
Abstract: This 2-day workshop will cover a series of hands-on advanced 3D machine learning techniques for generative architecture design, including but not limited to 2D Latent Walk Techniques, 2.5D Latent Walk Shape Synthesis, 3DGAN and Reinforcement Learning. The workshop will be project based, where participants will be asked to first create datasets or study explicit quantifiable metrics for generative architecture design, and then develop or customize generative architectural designs based on provided sample workflows.
The workshop will also examine the various types of artificial intelligence applied as cutting-edge generative architectural design techniques since the invention of the concept of “Computer-Aided Architectural Design” by William Mitchell (Mitchell 1975), the paradigm for modern generative architecture design systems. The workshop will reconceptualize Mitchell’s CAAD system, consisting of a representation system, a generation system, and a testing system, as the figurative framework of a generative machine that plays the generative game. The workshop will use this framework to examine state-of-the-art (SOTA) generative architecture design and explore new possibilities beyond existing methods. It will also guide students through the implementation of SOTA systems, experimenting with realizing generative design in 3D representations and post-processing the results for architecture design in a series of hands-on sessions.
Description: The workshop will start with a brief introductory lecture on day one covering the history of generative architecture design, whose origins date back to Sketchpad by Ivan Sutherland (Sutherland 1964), the Architecture Machine proposed by Nicholas Negroponte (Negroponte 1970), and the Computer-Aided Architectural Design systems of William Mitchell (Mitchell 1975). Mitchell’s proposal of computer-aided architectural design (CAAD) systems became the cornerstone paradigm for modern generative design systems in architecture. Mitchell’s CAAD system is composed of a representation system, a generation system and a testing system. We will use this framework to examine state-of-the-art (SOTA) generative architecture design and explore new possibilities beyond existing methods.
The workshop will guide students through the implementation of SOTA systems and experiment with realizing generative design in 3D representations and post-processing results for architecture design in a series of hands-on sessions. We will provide three tracks for students to choose from:
- Track 1: Use image processing to translate 2D generative design to 3D with latent walk techniques
- Track 2: Use 3DGAN to directly implement 3D generative design with voxels
- Track 3: Use the open-source project RhinoBIM
Students will learn the following topics in the workshop:
- Track 1:
- How to use StyleGAN for 2D image generation
- How to connect StyleGAN to Rhino-Grasshopper for real-time inference and latent walk
- How to generate 2.5D/3D shape using image processing and image lofting techniques
- Track 2:
- How to prepare 3D datasets for 3DGAN with binvox
- How to implement training of 3DGAN with open-source deep learning framework PyTorch
- How to use 3DGAN for shape synthesis in voxel format and mesh format
- How to use 3DGAN for real-time inference and visualization with Latent Walk
- Track 3:
- Gain basic understanding of reinforcement learning and computational cognitive science
- Learn how machine intelligence is used in the AEC industry
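The latent walk at the heart of Tracks 1 and 2 can be sketched without a trained model: a generator such as StyleGAN or a 3DGAN maps a latent vector to an image or voxel grid, and walking along a line between two latent codes yields a smooth sequence of in-between designs. The two-dimensional latent codes below are invented stand-ins (real StyleGAN codes are 512-dimensional and would be decoded with PyTorch).

```python
# Hedged sketch of the "latent walk" technique: interpolate between two latent
# codes to obtain a sequence of intermediate codes, each of which a trained
# generator would decode into an intermediate design.

def lerp(z0, z1, t):
    """Linear interpolation between two latent vectors at parameter t."""
    return [a + t * (b - a) for a, b in zip(z0, z1)]

def latent_walk(z0, z1, steps):
    """Return `steps` evenly spaced latent codes from z0 to z1, inclusive."""
    return [lerp(z0, z1, i / (steps - 1)) for i in range(steps)]

z_start = [0.0, 1.0]   # invented 2D latent codes for demonstration
z_end = [1.0, -1.0]
walk = latent_walk(z_start, z_end, 5)
print(walk[2])  # midpoint of the walk -> [0.5, 0.0]
```

In practice spherical interpolation (slerp) is often preferred for Gaussian latent spaces, but the linear version above conveys the idea.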
Form-Finding Staircases With COMPAS CEM
Workshop Leaders:
- Rafael Pastrana (Ph.D. Student, CREATE Laboratory, School of Architecture, Princeton University, USA)
- Isabel Moreira de Oliveira (Ph.D. Student, Form Finding Lab, School of Engineering and Applied Science, Princeton University, USA)
- Patrick Ole Ohlbrock (Postdoctoral Researcher, Chair of Structural Design, ETH Zürich, Switzerland)
- Pierluigi D’Acunto (Assistant Professor, Professorship of Structural Design, Technical University of Munich, Germany)
Abstract: In this workshop, participants will learn how to generate the geometry of lightweight and expressive structures using the Combinatorial Equilibrium Modeling (CEM) framework, a form-finding method based on vector-based graphic statics, and its computational implementation, COMPAS CEM. COMPAS CEM is a new design tool written in pure Python that optimizes structural geometries to best meet geometric and force-related constraints. To this end, the tool uses automatic differentiation, the family of algorithmic techniques that enables modern machine learning models to learn.
Through a series of guided exercises, participants will gain hands-on experience with COMPAS CEM and learn how to use it both from the command line and inside Grasshopper. The workshop’s theme is the design of the load-bearing structure of a staircase, an intimate spatial bridge that connects different floors and areas of a building. Participants will explore digitally how manipulating the connectivity and internal force states of a structure can steer the generation of a catalog of forms that are elegant, safe, and material-efficient. Besides acquiring a foundational understanding of how constrained form-finding works, participants will walk away from the workshop with a set of digital form-found structures that can be seamlessly ingested by other packages in the COMPAS ecosystem.
Description: Our main intent is to illustrate how constrained form-finding approaches can be integrated into contemporary design processes to create performant shapes from the ground up. We share the vision that structural performance can co-exist with other non-structural design objectives, and that numerical optimization is one way to achieve such integration efficiently and transparently. The workshop will follow three main thrusts. We will first introduce participants to the relevant theoretical background of form-finding, the CEM framework and numerical optimization with condensed lectures. To get practical experience with our software stack, participants will be guided next through a number of hands-on exercises that gradually increase in complexity. We plan to close the workshop with a capstone exercise with the help of the instruction team. Participants will be asked to use the tools and concepts learned during the workshop to form-find the structural geometry of a new staircase, or to replicate the structural geometry of an existing staircase of their choice.
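The optimization loop behind constrained form-finding can be sketched in a few lines. The toy below is not the CEM algorithm or the COMPAS CEM API: it tunes a single invented design parameter, a force density `q`, by gradient descent until a form-found node height meets a target constraint. COMPAS CEM obtains gradients via automatic differentiation; a finite difference stands in here.

```python
# Hedged sketch of gradient-based constrained form-finding. A free node is
# pulled toward an anchor at z = 0 with force density q and toward an anchor
# at z = 2 with force density 1, so its equilibrium height is z = 2 / (q + 1).
# We search for the q that places the node at a target height.

def form_found_z(q):
    """Equilibrium height of the free node for force density q."""
    return 2.0 / (q + 1.0)

def loss(q, z_target):
    """Squared violation of the height constraint."""
    return (form_found_z(q) - z_target) ** 2

z_target = 0.5              # constraint: node should sit at height 0.5
q, step, eps = 1.0, 5.0, 1e-6
for _ in range(2000):
    # Central finite difference in place of automatic differentiation.
    grad = (loss(q + eps, z_target) - loss(q - eps, z_target)) / (2 * eps)
    q -= step * grad
print(round(q, 3), round(form_found_z(q), 3))  # converges near q = 3.0, z = 0.5
```

Real problems optimize hundreds of force and geometric parameters at once, which is why exact, cheap gradients from automatic differentiation matter.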
Remote Robotic Assemblies
Workshop Leaders:
- Stefana Parascho (Assistant Professor, CREATE Laboratory, Princeton University)
- Edvard P.G. Bruun (PhD Candidate, Form Finding Lab / CREATE Laboratory, Princeton University)
- Gonzalo Casas (Software Engineer, Gramazio Kohler Research, ETH Zurich)
- Beverly Lytle (Software Engineer, Gramazio Kohler Research, ETH Zurich)
Abstract: Robotic fabrication in architecture and design relies on a wide range of computational tools and methods for designing, planning, and controlling robots. Through this workshop we aim to address the challenge in learning and accessing such tools, which represents a major barrier to entry for architects wishing to utilize robots more centrally within their work. By introducing participants to the open source framework COMPAS FAB for robotic fabrication and COMPAS RRC for robot control, we provide a central platform for the simulation and remote control of robots used for fabrication at the architectural scale. Workshop participants will design a space-frame structure from their homes; this structure will then be fabricated remotely in the Embodied Computation Lab at Princeton University.
Description: Robotic fabrication has become an integral part of the architectural discipline. Both academia and practice are focusing their efforts on unlocking the potential of robotic processes in architectural design and construction. However, due to the inherent interdisciplinarity of the field there is a lack of easily accessible and streamlined robotic communication tools specific to architectural applications. Such tools are necessary to better serve the emergent needs of architects and designers looking to work more with this developing technology. In addition, the global pandemic has made access to robotic laboratories more challenging than ever, leaving students and researchers to limit their work to simulation and virtual environments. This workshop aims to address this gap by introducing the participants to newly developed remote communication tools for robotic control.
The goal of this workshop is to take advantage of the unique circumstances of this year’s ACADIA online conference in order to increase the general accessibility of robotic fabrication tools and methods to participants all over the world. Specifically, we aim to introduce participants to the COMPAS FAB library and utilize it to enable the remote control of physical robots.
COMPAS FAB is the robotic fabrication package for the COMPAS Framework facilitating the planning and execution of robotic fabrication processes. It provides interfaces to existing software libraries and tools available in the field of robotics (e.g. OMPL, ROS) and makes them accessible from within a parametric design environment (Rhino Grasshopper).
Participants will learn robotic fabrication methods using the COMPAS Framework. Beginning from the fundamentals of robotics, and moving through forward and inverse kinematics, path planning and collision detection, the lessons will culminate with an application of all these building blocks to the assembly/disassembly of a space-frame structure chosen as a case-study. The physical setup that we will use for the fabrication of the space frame is located in the Embodied Computation Lab at Princeton University, and consists of two ABB IRB 4600-2.55m robots mounted on linear tracks IRBT 4004.
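The forward-kinematics fundamentals mentioned above can be illustrated with a planar two-link arm: chaining the joint rotations gives the end-effector position. The link lengths and joint angles below are invented for demonstration; COMPAS FAB handles full 6-axis robots such as the ABB IRB 4600 through its ROS interfaces.

```python
# Minimal forward-kinematics sketch for a 2-link planar arm (illustrative
# only; not the COMPAS FAB API). Joint 2 rides on the end of link 1, and
# angle t2 is measured relative to link 1's direction.

from math import cos, sin, pi

def fk_2link(l1, l2, t1, t2):
    """End-effector (x, y) of a planar arm with link lengths l1, l2
    and joint angles t1, t2 (radians)."""
    x = l1 * cos(t1) + l2 * cos(t1 + t2)
    y = l1 * sin(t1) + l2 * sin(t1 + t2)
    return x, y

# Both links 1 m long, shoulder at 90 degrees, elbow folded back 90 degrees:
x, y = fk_2link(1.0, 1.0, pi / 2, -pi / 2)
print(round(x, 6), round(y, 6))  # -> 1.0 1.0
```

Inverse kinematics runs this mapping backwards (angles from a target pose) and generally has multiple solutions, which is one reason path planning and collision checking are covered alongside it.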
The case-study will focus on implementing cooperative robotic assembly and disassembly processes. Robotic fabrication has been mostly used to address construction of new structures with individualized geometries that cannot be easily constructed by hand. In addition to this, we aim to explore the versatility of robotic processes by proposing both an assembly and disassembly cycle to demonstrate the potential use of robots beyond only the assembly of new structures. The chosen case-study will allow participants to explore the complexity of a multi-robotic set-up, as well as that of spatial assembly tasks; this will demonstrate the potential of robotic path-planning, simulation and online communication. In contrast to the traditional method of using pre-defined robotic code, this case-study gives designers the possibility to adapt, react and interact with the built structure, blurring the separation between design and construction.
Through the COMPAS RRC package, participants will be able to directly connect and send commands from their computers at home to the physical robots located in Princeton, USA. We will conclude the workshop with a demonstration: using two robots to cooperatively assemble and disassemble a wooden space-frame structure at the Embodied Computation Lab in Princeton.
Generative Design and Analysis in Early-Stage Planning with Spacemaker
Workshop Leaders:
- Christoph Becker (Spacemaker)
- Lilli Smith (Autodesk)
- Zach Kron (Autodesk)
Description: Learn about generative design and real-time multi-factor analysis in early-stage planning with Spacemaker. A significant part of value creation occurs in the early planning stages, yet a disproportionately low share of technology investment is allocated to this segment. Uninformed decisions in this phase drive costs in later stages, but too often crucial information is not yet readily available, leaving designs resting on multiple assumptions. Smart algorithms, automated data fetching, and geo-referenced digital 3D twins of your site from day one can help solve this issue. How can we use the full potential of manual, parametric, and generative design features, together with cloud technology, to boost our design capabilities? How can we use real-time analysis to connect program, solar, wind, acoustic, view, and sustainability data to building form? How can we capture the power of AI while keeping full control over the results and always remaining in the driver’s seat? This workshop will introduce a rethought, networked workflow for early-stage site planning. Participants will learn how to front-load their designs with information and make data-driven decisions, spending less time on manual setup and more on creative decision making, reducing risk with constant live feedback, and improving value by optimizing for density and living quality at the same time. Each day will combine presentations and hands-on exercises, and participants will act as both contestants and jury in a small design competition at the conclusion of the workshop.
Setup Requirements
- Chrome browser
- Valid e-mail address
- It will be helpful, if at all possible, to have two screens available for Zoom and Spacemaker. Please contact Lilli.Smith@autodesk.com if you have any problems with the setup so that we can try to help you in advance and won’t have to troubleshoot during the workshop.
Pre-Reading To get the most out of the workshop, it will be helpful to be familiar with these resources:
- Watch the first 4 Spacemaker video tutorials you find under “introduction” following this link (This takes less than 20 min)
- We encourage all participants to also watch the remaining video tutorials (this takes about 1h)