The creation of molecular nanotechnology (highly capable molecular machines that assemble products with atomic precision) could rival or surpass the impact of the Industrial Revolution.
Our physical production capacity could be increased by many orders of magnitude while making our current computers, medicines, and machines look primitive by comparison. Nanotech, despite decades of infighting and setbacks, remains the world’s most underrated technology, and
I believe the time to build nanotech is now.
Recent advances in deep learning, hardware engineering, and protein design, along with the rise of more ambitious sources of private capital for funding and the internet for coordinating talent, capital, and media, lay the foundation for two promising nanotech approaches to emerge.
The direct path to nanotech uses scanning probe microscopes to construct the parts and structures necessary to rapidly build molecular machines atom by atom while the indirect path uses specially designed proteins to iteratively work towards inorganic molecular machines.
The direct path suffers from moderate near-term economic challenges and an initial slow rate of physical iteration (a theoretical maximum of one atom placed per second) but offers a solid, though not complete, theoretical basis for achieving molecular nanotechnology.
Meanwhile, the indirect path has inverted challenges, in that there is an almost infinite abundance of economic opportunities in chemicals, materials, and medicine alongside a relatively rapid rate of physical iteration, but the theoretical basis for achieving molecular nanotechnology is less sound than the direct path.
This piece covers the technical, economic, cultural, and regulatory aspects of each approach, synthesizing them into the conclusion that a private company should be created to do both approaches at the same time to minimize economic risk and maximize the probability of achieving molecular nanotechnology.
What are you waiting for? The future of physical technology awaits.
Table Of Contents
Brief Historical Context
Why Now?
Why Not MEMs and E-Beams?
Why Do A Joint Structure?
Why SPMs?
Opportunities
Grand Challenges
Technical De-risking
Cultural Element
Commercialization
Why Proteins?
Advantages
The Main Problem
Technical De-risking
Regulations
Commercialization
Brief Technical Overview
R&D Computational Methods
Automated Assays
Process Scale-Up
Manufacturing
Additional Questions
Conclusion
Brief Historical Context
The history of molecular nanotechnology could fill multiple books, but here is the condensed version with the relevant technical highlights.
In 1959, Richard Feynman gave a lecture at the American Physical Society about the potential of manipulating matter at the level of individual atoms.
The idea then languished for years until Norio Taniguchi, a professor at the Tokyo University of Science, coined the term “nano-technology” in 1974, in reference to how future machining techniques could separate, consolidate, or deform materials one atom or molecule at a time.
However, it was the combination of Genentech’s success with recombinant DNA in the 1970s and the rise of advanced computational modeling that led K. Eric Drexler1, an MIT engineer, to write the first paper on modern molecular nanotechnology.
This paper, “Molecular engineering: An approach to the development of general capabilities for molecular manipulation,” had a core thesis that enzymes, with their high chemo-, regio-, and stereoselectivity, could be used alongside other proteins to assemble protein-like nanomachines out of stiffer, less fragile materials than proteins. It would then follow that these second generation nanomachines would make even more capable third generation machines, and so on, repeating iteratively until the limits of physics and chemistry were found. At that point, molecular machines would have dramatically advanced every area of physical technology. In addition to proteins, alternative approaches towards this vision that have been proposed include:
Microelectromechanical systems (MEMs). MEMs are microscopic devices made using semiconductor processing techniques that incorporate electronic and moving parts; they are frequently used as motion sensors in phones and cars. The idea of using MEMs to create nanotechnology is often credited to Feynman’s 1959 APS talk but was in fact given to him by a friend. This friend observed that the Industrial Revolution was an exercise in using large tools to make smaller and smaller tools, and that the tele-operated robotic arms used in nuclear facilities could repeat this process iteratively down to the nanoscale. J. Storrs Hall, the former president of the Foresight Institute and noted author, is a prominent advocate of this method.
Electron Beam (E-Beam) Methods. There are a variety of electron beam methods (EBL, SEM, TEM, STEM, Aberration-Corrected STEM, etc.) but the core concept is that a highly focused beam of electrons is used to either image or pattern materials at the nanoscale. Due to the success of the semiconductor industry, this is the form of “nanotechnology” that most people interact with on a daily basis.
Scanning Probe Microscopes (SPMs). The technical description of how these operate depends on whether you’re using a scanning tunneling microscope (STM) or an atomic force microscope (AFM), with non-contact AFM (nc-AFM) being a prominent variant. However, the core technology revolves around an atomically sharp tip, controlled by piezoelectric actuators, that manipulates individual atoms through electrical or mechanical forces. Notable work includes Don Eigler’s 1990 paper in which he positioned 35 xenon atoms to spell the letters “IBM.”
These visions were refined in various books, conferences, and even heated debates with Nobel Laureates, but, despite varying degrees of progress being made in each discipline, the high-level vision of building molecular machinery was pushed to the side in favor of valuable but incremental materials chemistry work.
Now, nearly 21 years after the infamous Drexler-Smalley debate, I believe that there is a compelling “why now?” for pursuing molecular nanotechnology.
Why Now?
The biggest trend over the past decade is this: humanity has acquired the compute, data, and algorithms necessary to make deep learning work for a variety of tasks.
The most prominent advance has been language modeling, which in late 2022 brought a flurry of attention to deep learning’s progress, but DeepMind’s work on protein structure prediction with AI is the most relevant for molecular nanotechnology. The protein folding problem (alongside the “inverse protein folding problem”), which plagued early conceptions of biochemical molecular nanotechnology, has now become tractable. It is important to note, however, that the problem (especially its inverse version) is not fully solved, contrary to popular conception.
In addition, Moore’s law has continued on, granting us chips with features measured in nanometers, and when this is combined with advances in mRNA vaccines, self-assembling DNA, and organic synthesis, it seems obvious that the frontiers of science are heading towards the nanoscale. I am not the first to notice this (that honor would fall to Michael Nielsen).
Furthermore, hardware engineering, both as a cultural phenomenon and as a practice, has come back into vogue, leading many to re-evaluate the potential of scanning probe microscopes for nanotechnology, given how little commercial development they have seen.
Finally, two important social technologies have been developed since molecular nanotechnology’s earlier waves of activity.
First, private sources of capital are willing to be significantly more experimental than they were in the early 2000s when molecular nanotechnology was at its zenith.
Second, the internet has cut the marginal cost of content distribution to near zero, allowing for the connection of like-minded people to rapidly communicate, coordinate, and thus advance the field of molecular nanotechnology.2
Thus, I argue that now is the time to go build nanotech. But how should we go about this task?
Why Not MEMs and E-Beams?
There are four prominent pathways to develop molecular nanotechnology (MEMs, E-Beams, SPMs, Proteins) but two of them stand out: SPMs and Proteins. These will each get their own sections. This section is dedicated to explaining why the other two routes (MEMs and E-Beams), while valuable, are not on the critical path to nanotech.
MEMs:
The primary bottleneck for MEMs is stiction, the unintentional adhesion of two microstructures caused by their extraordinarily high surface-area-to-volume ratio (which increases as size decreases), in which restoring forces are unable to overcome surface adhesion forces. This problem could theoretically be solved with atomically precise surfaces, where a phenomenon called superlubricity occurs, leading to near-zero (but not zero) frictional forces. Until that point, however, MEMs will suffer greatly from stiction; ALD coatings can reduce its effects, but it remains a major hindrance. This is why there are no commercial Class IV (rubbing-surfaces) MEMs devices, and why the commercialized devices with motion that aren’t sensors typically only bend a mirror by a few degrees at most (DLP projector chips).
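As a rough illustration of why surface forces dominate at small scales, here is a minimal sketch (plain Python, no dependencies) of how the surface-area-to-volume ratio of a cube scales as its edge length shrinks; the specific edge lengths are arbitrary choices for illustration.

```python
# Surface-area-to-volume ratio of a cube scales as 6/L: it grows by a factor
# of 1,000 for every 1,000x reduction in size, which is why adhesion
# (stiction) overwhelms elastic restoring forces in small MEMs structures.
def surface_to_volume_ratio(edge_length_m: float) -> float:
    surface_area = 6 * edge_length_m ** 2   # m^2
    volume = edge_length_m ** 3             # m^3
    return surface_area / volume            # 1/m

for label, edge in [("1 cm", 1e-2), ("1 um", 1e-6), ("1 nm", 1e-9)]:
    print(f"cube edge {label}: SA/V = {surface_to_volume_ratio(edge):.1e} 1/m")
```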
E-Beams:
The two primary bottlenecks for e-beams are that one cannot create nanoscale machines solely through patterning and that one cannot make large, macroscopic quantities of material using e-beams, as that is not the point of lithographic techniques. As a counterargument, some combination of three-dimensional epitaxial methods and e-beams could be used for additive and subtractive manufacturing, respectively, to create nanoscale structures, but these would not constitute machines. I wholly expect research in e-beams to continue for advancements in the semiconductor and quantum technology industries, particularly for memristor technology.
Why Do A Joint Structure?
It’s now time to cover the two most promising approaches to nanotechnology: SPMs and proteins. However, before we dive into these, I want to address why I believe the best way to advance these approaches is a private company that pursues both approaches simultaneously. The two counter-arguments to this claim typically follow the lines of 1) “Why do both approaches instead of just one?” and 2) “Why a private company instead of an FRO?”
To answer the first question, while it is a harder and somewhat more capital intensive path than the “just pick one” approach, I believe this way 1) maximizes the probability that molecular nanotechnology happens, 2) is what I would do if I had an “abundance mindset” and 3), as a side effect, has very nice synergies.
To expand upon 1), the theoretical basis for scanning probes achieving molecular nanotechnology is less shaky than the basis for proteins doing so, but nothing built with scanning probes has ever reached macroscopic scale, whereas we can make macroscopic quantities of complex proteins from day one.
For 2), I look to Retro Biosciences, a multi-program longevity biotechnology company funded by Sam Altman and Rippling, a champion of the “compound startup model,” as character studies. While there is a lot of conventional wisdom about doing one thing and one thing only, I believe that nanotechnology is such a difficult problem that betting everything on only one approach is suboptimal. Furthermore, if initial funding wasn’t a major barrier then I would just do both approaches anyways. (This isn’t an endorsement of not being frugal, but rather having a large quantity of ambition.)
For 3), since the end goal is the same for both and much of the laboratory and computational approaches can be reused between the two (particularly when it comes to molecular dynamics simulations), it’s not as crazy as it sounds. (Although it might be a tad contrarian.)
To answer the second question, FROs are great for certain things, and while one could be a useful tool for advancing the direct path, I don’t think it’s the appropriate structure. For one, venture funding is quicker, much more plentiful, and has fewer restrictions than philanthropic funding. Second, these approaches would be commercialized anyway, so setting up the structure as a nonprofit and then transitioning would be dicey. (Look at what happened to OpenAI…let’s not do that.) Third, the protein approach is commercially viable from day one, and the direct approach does not require much money to de-risk before pursuing creative monetization schemes.
Why SPMs?
Opportunities
I’m calling this first section “opportunities” because there is a lot of low-hanging fruit for improving scanning probe microscopes.
First, some hurdles that SPMs face include liquid helium cryogenics, ultra-high vacuum (UHV) chambers, expensive control electronics, and the reduction of various forms of noise (thermal, acoustic, creep, hysteresis, resonance, electrical). None of these hurdles is insurmountable, and when it comes to the expensive control electronics (which are marked up to many multiples of the cost of their actual components because the market is so niche), there is significant room for improvement along the axes of control and performance.
Second, SPMs are currently limited to 3DOF (XYZ) positioning, and to build complex diamondoid structures 5DOF or 6DOF is likely necessary, which is highly non-trivial to achieve unless you have precise control of the tip of the SPM. Moving to non-diamondoid materials systems, such as silicon or silicon carbide, has been proposed, although little is known experimentally about their behavior. This could be overcome through simulations and initial experimental work, however.
Third, flat material surfaces are hard to prepare or can get “dirty” very quickly depending on your level of vacuum. For example, a Si(111)-7x7 surface is relatively easy to prepare but gets dirty quickly. However, an Au(111) surface, which is slightly harder to prepare, is much less reactive and, as a side benefit, can be used as a base on which to create a self-assembled monolayer of tripods that hold initial feedstock molecules. (This UCLA PhD thesis from Sam Lilak covers this in much more depth.)
Fourth, STMs, which are faster than AFMs and nc-AFMs, are still not very fast, in part due to the poor quality of the digital-to-analog converters used in the control electronics and the low resonant frequency of the “scanhead.” (The latter can be increased by making the scanhead stiffer and smaller, but the resonant frequency can only go so high.) At most you might end up with a placement speed of 1 atom/second, and this has only been demonstrated with simple materials systems in two dimensions. Again, one could make complex structures under this limitation, but it severely restricts the rate of large-scale physical iteration that could occur initially.
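To make that iteration-rate constraint concrete, here is a back-of-the-envelope sketch (plain Python) of build times at the roughly 1 atom/second ceiling mentioned above; the structure sizes are arbitrary examples, not targets from any roadmap.

```python
# Time to serially place N atoms at ~1 atom/second, the rough ceiling
# discussed above for current STM-based atomic manipulation.
PLACEMENT_RATE_ATOMS_PER_S = 1.0

def build_time_seconds(n_atoms: int) -> float:
    return n_atoms / PLACEMENT_RATE_ATOMS_PER_S

for n in (100, 1_000, 1_000_000, 1_000_000_000):
    t = build_time_seconds(n)
    print(f"{n:>13,} atoms -> {t/3600:10.2f} hours ({t/86400:9.2f} days)")

# A 100-1,000 atom part costs minutes of placement time, but anything
# approaching macroscopic scale is out of reach without massive parallelism
# or nanoscale assembly tools built along the way.
```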
Fifth, one viewpoint regarding the prior “problem” is that once we are able to make better tooltips for AFMs and simple nanomechanical parts, nanoscale robotic arms for manipulation could be constructed, perhaps with carbon nanotubes as structural elements (since they have been attached to scanning probes before), and efforts could be sped up. (This is why the lack of macroscopic, large-scale parallelism exhibited by SPMs is not a “good” long-term argument against their success. It is not the critical condition, although large-scale parallelism would be useful once scanning probes are made cheaper and more reliable.) Even then, there is a lack of computational designs for future parts (although that problem could be solved with sufficient effort) and broad open questions about how far up the ladder of orders of magnitude (from the single atom to the macroscopic scale) replicative efforts could go.
Grand Challenges
There are two problems with SPMs that stand out to me as “grand challenges” in the field: controlling the structure of the SPM tip, and finding a way to reverse individual mechanosynthetic reactions.
First, controlling the structure of the SPM tip is a hard problem. Attaching even a simple CO molecule in order to do subatomic scanning on an AFM is non-trivial, but there could be ways around this. To start, there was a 1990 Nature proposal by Eric Drexler and John Foster to attach proteins or other molecules to the end of an SPM tip in order to gain more precise control or new capabilities. (This is also, unintentionally, a benefit of pursuing both SPMs and proteins: synergies like this keep popping up.) Furthermore, I think that simply running more experiments with various tips will allow us to better understand their mechanisms of degradation and failure, since, as is a broad problem in the SPM community, no mass-produced, low-cost, reliable atomic-manipulation SPM exists right now. (This will also help with de-risking, as I’ll mention in the next section.)
Second, individual mechanosynthetic reactions can sometimes be irreversible and unreliable, although this is a disputed claim due to how few reactions like this have been done. This problem is only exacerbated due to how slow most current commercial SPMs are, severely hampering progress on this pathway. The solution to this seems to be improving the speed, correction quality (via computer vision, better control electronics and real-time software, etc.), and sheer number of experiments done to build familiarity with SPM manipulation tasks.
Technical De-risking
The primary way to de-risk the SPM approach would not be through computation (DFT/ab initio and molecular mechanics methods) but rather through experimentation.
In particular, constructing a low-cost STM and then iteratively updating it (particularly the control electronics and, later, the software to account for creep, hysteresis, and the like) would be the key factor here. This is validated by Zyvex Labs, an atomically precise manufacturing company, which also focuses heavily on control software and electronics.
Liquid helium cryogenics, vacuum chambers and their associated pumps, self-assembled monolayer tripod chemistry and other important factors for true research capabilities are non-trivial but are not the true bottlenecks for advancing the SPM route.
Once the control software and electronics have been sufficiently improved, then there are a variety of experiments that would need to be performed with a variety of materials systems in order to properly scope out the field.
For example, the series of emails between Dr. Philip Moriarty and Chris Peterson (with contributions from Hal Finney) offers a good start in terms of technical criticism, along the lines of: general-purpose assemblers don’t exist, mechanosynthesis wouldn’t work for any materials, and before asking for millions in government funding (not even VC funding, government funding) one should actually have a detailed plan. J. Storrs Hall’s piece for the Abundance Institute about molecular nanotechnology argued that “...we must first develop the basic manufacturing technology capable of controllably and reliably building atomically precise 3D objects at the 10–1,000 atom size level.” I think this would be a worthy intermediate goal, though reliability should first be gained at the two-dimensional level, since three-dimensional manipulation via an SPM has not been achieved; the closest result has been 3D atomic disassembly via an STM. The second half of the 2007 Battelle-Foresight nanotech roadmap covers more papers and directions in detail.
Cultural Element
The video “A Boy And His Atom” from IBM, in which (from what I could tell) they manipulated carbon monoxide molecules on a copper surface with a liquid-helium-cooled STM, has 24 million views on YouTube.
There is an outsized opportunity to do similar initiatives using the capabilities developed here. Scanning probe microscopy is an inherently visual technical field, and if unique capabilities are developed then they should be properly utilized for advertising, hype, updates, and business development purposes. Be creative!
Commercialization
The two categories of initial products that SPMs could produce are quantum devices for commercial purposes and nanoscale devices for research purposes.
First, creating quantum devices for commercial purposes would mean building either sensors or computers. Quantum computers would likely look like the work Michelle Simmons’ group is doing, while the quantum sensors would be nitrogen-vacancy (NV) center diamond magnetometers or perhaps more exotic single-atom spectroscopy. It is important to note that initial versions of quantum technologies could be more easily created with semiconductor processing technologies; an example is Diraq, a quantum computing company leveraging GlobalFoundries’s existing infrastructure. However, the “bull case” for scanning probe microscopes with respect to quantum technologies is that their unmatched ability to manipulate individual atoms will let them overcome decoherence and other notable issues.
Second, creating small devices for research purposes (such as nanoelectronics research) would follow a contract research organization model, where for a price we would run experiments. This exists in molecular biology (with cloud lab companies such as Strateos), but I think this business could be interesting because of how high the value-add per experiment could be compared to the lower average value of a given molecular biology experiment. A major uncertainty here is the quantity of demand, which is unlikely to be extremely large (>50M USD) but is likely to exist due to the lack of high-quality, modern CROs for nanotechnology experiments.
Finally, although this idea is unorthodox, I believe that “custom design as a service” is a natural extension of the contract research organization business and could be highly profitable. As I mentioned in the prior “cultural element” section, there is large potential for virality here, and companies such as Monumental Labs have similar enough initial business models. (If someone would pay for a custom macroscopic sculpture, why not an atomic one?)
Why Proteins?
Advantages
First, I believe that proteins are the best way to align progress in deep learning with progress in nanotechnology. Many of the tasks necessary for designing molecular machinery, from transition state modeling to inverse folding to structure prediction (most famously done with DeepMind’s AlphaFold2), are not easily or quickly calculated by simple mathematical models. Instead, deep learning models, given sufficient data in the pre-training and fine-tuning stages, can enable enormous advances because of how rapid digital iteration is. Notably, both the computational and experimental data available to feed these models are growing fast.
Second, proteins have benefited from many R&D and manufacturing advances, from decreasing costs in DNA sequencing and synthesis (with some nuanced commentary on how closely these trends can be compared to the 1965 paper Gordon E. Moore wrote about integrated circuits) to retrosynthetic methods, wet lab robotics, microfluidics, proteomics, optically controlled proteins, and so on.
Third, from a market perspective, which I will explore more, the opportunities available for capable proteins (medicine, chemicals, materials, etc.) are quite large and could be expanded as the capabilities of molecular machines increase. (Of course, as will be explained in the economics and technical sections, while the core advances would be in molecular machinery, we do not have to expect them to be the best tool for every step of every product line to start.)
Finally, the number of students and professionals being trained in protein engineering is rapidly growing due to the field’s popularity, creating a large potential workforce to pull from in order to sustain both R&D and commercialization efforts.
The Main Problem
There are concerns that, despite proteins’ ability to function in the solution phase, the path from proteins to inorganic machines that work in vacuum-based environments is not clear. However, as I’ll explore further in the technical de-risking section, I am cautiously optimistic about this challenge. (I would like to note that the design problems of complex molecular machinery and nanofactories have not been rigorously solved by the SPM approach either. This is an open question.)
The first generation of protein nanomachines will not be able to create diamondoid nanomechanical computers or highly complex nanofactories, but they will be able to catalyze all of the reactions necessary to lay the groundwork for inorganic machines, such as carbon-carbon bond formation.
Furthermore, progress in computational models and experimental methods surrounding self-assembled protein complexes is rapidly advancing, which is likely to be necessary in constructing more complex molecular machinery. These complexes can stabilize more challenging transition states and can provide structural frameworks for future machines. In particular, the perspective that I enjoy the most (framed by Adam Marblestone) is that you could create fully-addressable surfaces and structures but with “stiffer, finer-grained and more chemically controllable protein scaffolds rather than DNA origami scaffolds.” (This could be a way to address concerns about termination control that have been brought up by commentators.)
Finally, I think that Adam Marblestone again phrased a core R&D challenge that will be found out one way or another using computation and experimentation: “In addition to fleshing out such a design(s) and then doing a ton of work on its pieces, a research program would want to ask questions like: can a printer that is structurally made from DNA origami eventually make a printer that is structurally made from peptide bricks or spiroligomers? Can that printer make a printer that is made from something even better (e.g., stiffer), like some kind of metal-organic complex? And so on, eventually getting to, say, a diamondoid printer. Even in theory. This could be a modeling exercise.”
I believe it should be possible on some basis of faith and intuition but I know that I need to acquire the numbers to back that up. The next section details what that de-risking could look like.
Technical De-risking
While there are some approaches that involve soluble gemstones or small molecules as non-protein nanotech, it seems like the first order of business is to advance computational protein design to a level where de novo proteins are fully-addressable in the same way DNA nanotechnology is. That could then provide a solid technical foundation in order to progress into more inorganic molecular machines. Alexis Courbet and his collaborators at the Institute for Protein Design are the most focused on this technical challenge, so it would be prudent to further discuss with them.
Regulations
The purpose of this section is not to highlight every single sentence of every single relevant regulation, but rather to “scope out” the most prominent ones to ensure a practical mindset is maintained. Now, it is unclear what the regulatory environment will look like in the future due to the uncertainty of national politics, but there are three main regulatory bodies that many of the potential products made with protein nanotech would have to interact with: the FDA, EPA, and USDA. (Note: other important regulatory bodies relevant to manufacturing include, but are not limited to, the ISO, OSHA, UL, and the State of California.)
The regulations that the FDA imposes are complex and depend on the product produced (Celine Halioua has a good starter essay on this), such as whether it’s a new therapeutic, medical device, diagnostic, or other product. However, acquiring a cGMP (Current Good Manufacturing Practice) certification for facilities is a common thread. For cosmetics, which are also regulated by the FDA, the standard tends to be GMP, a less stringent version of cGMP, but that is not always the case depending on the company.
When it comes to the EPA, they are especially concerned about new chemical products, which are regulated under the Toxic Substances Control Act (TSCA). This means that if you want to mass-manufacture (>10,000 kg) a new chemical, then you need to file a pre-manufacture notice (PMN) at least 90 days prior to manufacturing said chemical. It is important to note that one can receive reduced regulatory scrutiny or exemptions depending on what the product is, including a low volume exemption (LVE) and an R&D exemption.
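As a crude summary of the decision logic described above, the sketch below encodes the volume threshold and exemptions mentioned; this is a mnemonic, not legal guidance, since real TSCA determinations hinge on far more than annual volume.

```python
# Crude sketch of the TSCA pathway logic described above. Real determinations
# depend on much more than annual volume (exposure, release, polymer
# exemptions, etc.); treat this as a mnemonic only.
def tsca_pathway(annual_volume_kg: float, is_rnd_only: bool) -> str:
    if is_rnd_only:
        return "R&D exemption may apply (no PMN required)"
    if annual_volume_kg <= 10_000:
        return "Low volume exemption (LVE) may apply"
    return "File a pre-manufacture notice (PMN) at least 90 days before manufacturing"

print(tsca_pathway(annual_volume_kg=50_000, is_rnd_only=False))
```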
Finally, for the USDA, many of the major regulations surround the usage of synthetic substances on “organic” foods, along with the usage of biotechnology in order to modify crops and animals. Chemical and materials regulations for agriculture would also be placed under the EPA, as previously discussed.
Commercialization
There are many markets to go for with protein nanotechnology, but the most relevant initial ones are likely going to be in advanced catalysis, particularly for complex, high-value products. Out of these I would say that the synthesis of active pharmaceutical ingredients (APIs) and natural ingredients are particularly compelling. Letters of intent (LOIs) could be procured for both these areas and additional future ones to show proof of business development. (Yes, they are non-binding, but it’s much better than nothing and could be upgraded into more stringent contracts.)
From a strategy perspective, I argue it would be prudent to keep the product novelty down for the moment (as in, not producing new polymers via biocatalysis or going into highly technical biomedical markets) to quickly get an initial “win” that could be scaled up industrially. After that, it will be easier to take significantly larger product risks and to expand into new markets.
Brief Technical Overview
The technical challenges in R&D and productization for protein molecular machines are vast, but they can be roughly grouped into four categories: R&D computational methods, automated assay screening, process scale-up, and manufacturing.
R&D Computational Methods
In order to develop protein nanotechnologies, a broad variety of (frequently deep-learning-based) computational methods will need to be employed. The core functions would be transition state modeling, generative design for proteins, and dynamics modeling. I also want to note that, due to how fast AI is moving, the specific model names mentioned here will likely be outdated soon.
Deep-learning-driven transition state modeling is essential for building out chemical reaction networks and identifying specific reactions around which one could design enzymes and protein complexes to construct products and advanced molecular machines. Deep learning is attractive because traditional methods for determining transition states, such as DFT, are computationally intensive, and computation is needed at all because of the transient nature of transition states. (Often, these structures only live on a timescale of femtoseconds, so while they can be characterized by techniques like ultrafast millimeter-wave vibrational spectroscopy, such experiments are expensive and cannot be applied universally across reaction types.) Heather Kulik’s lab at MIT is at the forefront of training deep learning models, both diffusion-based (a stochastic sampling process) and optimal-transport-based (a non-stochastic one), to model transition states. There are many areas of improvement for these models, but I believe they should work reasonably well for many organic reactions, which would be well suited to protein catalysts to start.
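To show how a predicted barrier would actually be used downstream when screening reactions for a network, here is a minimal sketch that converts an activation free energy into a rate constant via the standard Eyring equation; the `predict_barrier_kcal_mol` stub is a purely hypothetical stand-in for whichever trained transition-state model ends up being used, and its inputs and return value are placeholders.

```python
import math

# Physical constants
K_B = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def predict_barrier_kcal_mol(reactant_smiles: str, product_smiles: str) -> float:
    """Hypothetical stand-in for a trained transition-state model
    (e.g., a diffusion- or optimal-transport-based predictor)."""
    return 18.0  # placeholder value for illustration only

def eyring_rate_constant(delta_g_kcal_mol: float, temperature_k: float = 298.15) -> float:
    """Eyring equation: k = (k_B*T/h) * exp(-dG_act / (R*T))."""
    delta_g_j_mol = delta_g_kcal_mol * 4184.0
    return (K_B * temperature_k / H) * math.exp(-delta_g_j_mol / (R * temperature_k))

barrier = predict_barrier_kcal_mol("CC=O", "CCO")  # hypothetical inputs
print(f"Predicted barrier: {barrier:.1f} kcal/mol")
print(f"Eyring rate constant at 298 K: {eyring_rate_constant(barrier):.3e} 1/s")
```

Reactions whose estimated rates clear some threshold could then be prioritized as candidates for enzyme design.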
For the design of the actual proteins, the workflow depends on whether the goal is to create a catalyst (which makes the most use of the prior transition state models, since the active site of the protein stabilizes the relevant transition state in order to catalyze the reaction), a structure, or a binder, but the general workflow for de novo design is as follows.
First, using Rosetta or another computational platform (but typically Rosetta), one designs a protein backbone that can then be fed into a diffusion model such as RFDiffusion. This diffusion model generates a relevant protein, which is then fed into an “inverse folding” neural network such as ProteinMPNN to generate a sequence corresponding to the structure of the generated protein. To validate this, the sequence is fed into a structure prediction model such as AlphaFold2 to verify that the structure created by RFDiffusion will fold or assemble as desired.
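A sketch of that design-then-validate loop is below. The helper functions are hypothetical stand-ins for RFDiffusion, ProteinMPNN, and AlphaFold2 (the real tools have their own interfaces), and the RMSD cutoff is an arbitrary illustrative threshold.

```python
# Sketch of the de novo design -> validate loop described above.
RMSD_CUTOFF_ANGSTROM = 2.0  # arbitrary self-consistency threshold

def generate_backbone(target_spec: dict) -> str:
    """Stand-in for backbone generation (e.g., Rosetta / RFDiffusion)."""
    return "BACKBONE_PDB_PLACEHOLDER"

def design_sequences(backbone: str, n: int) -> list[str]:
    """Stand-in for inverse folding (e.g., ProteinMPNN)."""
    return ["MKT..." for _ in range(n)]  # placeholder sequences

def predict_structure(sequence: str) -> str:
    """Stand-in for structure prediction (e.g., AlphaFold2)."""
    return "PREDICTED_PDB_PLACEHOLDER"

def rmsd(structure_a: str, structure_b: str) -> float:
    """Stand-in for a structural agreement metric."""
    return 1.0  # placeholder value

def design_candidates(target_spec, n_backbones=10, n_sequences_per_backbone=8):
    """Keep only sequences whose predicted structure 'folds back' onto the design."""
    accepted = []
    for _ in range(n_backbones):
        backbone = generate_backbone(target_spec)
        for sequence in design_sequences(backbone, n=n_sequences_per_backbone):
            predicted = predict_structure(sequence)
            if rmsd(predicted, backbone) < RMSD_CUTOFF_ANGSTROM:
                accepted.append((sequence, predicted))
    return accepted

print(len(design_candidates({"fold": "beta-barrel"})))  # hypothetical target spec
```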
Finally, once these proteins are created, they should ideally be simulated for a variety of reasons. Whether you call them machine learning force fields or neural network potentials, training deep learning models on DFT or molecular mechanics data is the future of simulating complex molecules without needing infinite compute. There is enormous potential here that has not been fully explored, which is best seen in a very recent paper from Microsoft Research. The view that molecular dynamics and molecular mechanics models should be used to enhance our understanding of protein dynamics is also championed by many in-domain writers online.
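As a minimal, runnable illustration of the workflow pattern this implies, the sketch below uses ASE with its built-in EMT toy potential (which only handles a few simple metals, hence the placeholder copper system) purely as a stand-in for a trained machine-learning force field, which would plug in as its own ASE-compatible calculator; the system and run length are arbitrary.

```python
# Minimal ASE molecular dynamics loop. EMT is a toy potential used here only
# as a stand-in for an ML force field / neural network potential calculator.
from ase import units
from ase.build import bulk
from ase.calculators.emt import EMT
from ase.md.langevin import Langevin
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution

atoms = bulk("Cu", "fcc", a=3.6).repeat((3, 3, 3))  # placeholder system
atoms.calc = EMT()  # swap in an ML potential calculator here for real work

MaxwellBoltzmannDistribution(atoms, temperature_K=300)  # thermalize velocities
dyn = Langevin(atoms, timestep=2.0 * units.fs, temperature_K=300, friction=0.002)
dyn.run(500)  # short relaxation; production runs would be far longer

print(f"Potential energy after MD: {atoms.get_potential_energy():.3f} eV")
```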
Automated Assays
I argue that the ideal way to do high-throughput rapid functional assay screening is to use standardized cell-free expression systems. While there are differences between expressing a protein in a microbe versus in a cell-free environment, I believe that the iteration speed gains would make up for this. The first iteration of any new protein is likely going to be suboptimal, so those would be optimized both in silico and in vitro as quickly as possible before the final “scale-worthy” protein is found and then expressed in a microbe.
The four-step process for this would roughly follow a “self-driving lab” concept: assemble your given gene (DNA synthesis), PCR that gene to larger quantities, use cell-free methods to express that gene as a protein, and then use functional or structural assays to determine the properties of said protein before repeating the process. This process took the group who did this around 9 hours per run (1 hour for the gene assembly, 1 hour for the PCR, 3 hours for the cell-free expression, and 3 hours for their specific thermostability assay, with an hour in between for spacing). The cost for 20 runs of their optimization with a batch size of three was 5,200 USD (2,400 USD for the DNA fragments, 1,300 USD for all the reagents, and 1,500 USD for the Strateos Cloud Lab).
The time here could be reduced by using top-quality thermal cyclers for the PCR and by using high-energy pyruvate and other optimizations for the cell-free expression. The assays are a tricky place to cut time because, depending on your assay and what you’re looking for, it could be quite quick or take many hours. Overall, the authors noted that a “single 2.5-month pause caused by shipping delays” was the biggest bottleneck for their automated lab, which didn’t even include microfluidic techniques like HT-MEK to speed up the process and increase throughput.
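To keep the iteration economics concrete, here is a small model (plain Python) of the cited loop using the per-step times and total cost reported above; the per-run cost is simply the reported total divided by the number of runs, so treat it as a rough average.

```python
# Per-run times (hours) reported for the self-driving-lab loop described above.
STEP_HOURS = {
    "gene_assembly": 1,
    "pcr": 1,
    "cell_free_expression": 3,
    "functional_assay": 3,
    "handling_between_steps": 1,
}
TOTAL_COST_USD = 5200  # 20 runs, batch size 3 (DNA + reagents + cloud lab)
N_RUNS = 20

hours_per_run = sum(STEP_HOURS.values())
cost_per_run = TOTAL_COST_USD / N_RUNS

print(f"Hours per design-build-test run: {hours_per_run}")            # 9
print(f"Approximate cost per run:        ${cost_per_run:.0f}")        # 260
print(f"Runs per 24/7 week (one rig):    {7 * 24 // hours_per_run}")  # 18
```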
Process Scale-Up
Process scale-up is not to be underestimated, since it’s easy to make something work in the lab but hard to make it work at scale. However, in this case I think it wouldn’t be quite as bad as trying to scale, say, a novel chemical reaction, because microbial fermentation has a fast doubling time and there are numerous startups working to standardize and modernize 250 mL and 5 L bioreactor capacity. That is quite nice, because it means we could keep our focus on nanotech rather than on yet another technical problem.
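The “fast doubling time” point can be made quantitative with a short sketch; the 30-minute doubling time is a typical textbook figure for E. coli in rich media (an assumption on my part), and the volumes are arbitrary.

```python
import math

# Generations needed to grow an inoculum up to a production-scale culture
# volume at roughly constant cell density.
doubling_time_min = 30        # typical E. coli figure in rich media (assumption)
inoculum_ml = 5.0
production_ml = 5000.0        # a 5 L bench bioreactor

generations = math.log2(production_ml / inoculum_ml)
print(f"{generations:.1f} doublings, about "
      f"{generations * doubling_time_min / 60:.1f} hours of growth")
```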
Manufacturing
I am going to assume that the product, regardless of whether it’s a new diagnostic, chemical, material or something else, is going to be manufactured using an optimized precision fermentation system that creates various proteins in order to perform tasks.
There isn’t much to mention here, in part because this is somewhat removed from where I am now, but the raw quantity of fermentation capacity necessary is not going to be as extreme as some may expect. If this were done in the traditional synthetic biology paradigm (microbes make the raw end product), there would likely be supply chain bottlenecks and poor unit economics. Having the microbes make the enzymes or proteins that are then used downstream bypasses both of these large-scale problems, and if free enzymes are used for catalysis rather than immobilized ones, that could further reduce supply chain complexity. The raw quantity of protein necessary for diagnostics or therapeutics is trivial compared to the quantity necessary for chemical or materials manufacturing, so the original position stands.
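A rough sketch of why “microbes make the catalyst, not the product” needs so much less fermentation capacity is below; the 1% (w/w) enzyme loading is an assumed, illustrative figure, since real loadings vary widely by process, as does the target output volume.

```python
# Compare fermentation output needed if microbes make the end product
# directly versus if they only make the enzyme catalyst.
annual_product_tonnes = 1000.0    # illustrative target output (assumption)
enzyme_loading_fraction = 0.01    # assumed 1% w/w enzyme per unit product

fermented_mass_product_route = annual_product_tonnes
fermented_mass_catalyst_route = annual_product_tonnes * enzyme_loading_fraction

print(f"Fermented mass, product-in-microbe route: {fermented_mass_product_route:,.0f} t/yr")
print(f"Fermented mass, enzyme-catalyst route:    {fermented_mass_catalyst_route:,.0f} t/yr")
print(f"Capacity reduction: "
      f"{fermented_mass_product_route / fermented_mass_catalyst_route:.0f}x")
```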
Finally, one major concern for the unit economics of manufacturing is actually the cost of distribution. For high-margin products such as APIs, natural ingredients or other things of that nature this is less of a concern, but if commodities are ever produced then it would be wise to have production be as local as possible to minimize transportation costs.
Additional Questions
Question 1: Should Solid-Phase Peptide Synthesis (SPPS) be used to make modular peptide nanotechnology? What about spiroligomers?
For SPPS, the primary concern I have is with the complexity and variety of the catalytic reactions that could occur. Other problems, such as the rate of peptide synthesis, multiple purification steps, low yields, and so on, do not seem like fundamental blockers in the way that being unable to do complex catalysis is. However, utilizing SPPS-made peptides as “helpers” to assist in various syntheses may prove fruitful.
My perspective on spiroligomers is that their stiffness and unique properties are attractive but we could just use traditional de novo protein design to achieve similar technical milestones.
Question 2: De novo design has been frequently mentioned in this white paper, but directed evolution and mutagenesis have not. What are your thoughts concerning these techniques?
I would not want to rule out either technique, since they can be useful in specific situations, but due to the complexity and novelty of the proteins required to achieve molecular nanotechnology, de novo design is a better fit for the majority of the required research and development. I could see directed evolution and mutagenesis techniques being used to optimize newly created de novo nanomachines. However, even then, I would prefer to do this computationally rather than experimentally if possible.
Question 3: Should we be building nanomechanical computers or synthetic biology-based computers in order to control the protein nanomachines?
Right now, I do not think that would be a prudent goal. Proteins are generally not stiff enough to do nanomechanical reversible link-logic computing, and while there are many fascinating implications that come from synthetic biology-driven computing, it does not seem to be a technology that would directly replace either CMOS or nanomechanical computing for our research, development, and deployment tasks.
As an aside, David Baker has written a paper on controlling semiconductor material nucleation with proteins, and while the extension of that could potentially lead to something like Asimov Press’s story “Tinker” by Richard Ngo, I do not believe that it is path critical at the moment.
Question 4: Proteins can be quite sensitive to conditions such as temperature, pH, and solvents. How would you deal with this?
While some variants of monomeric enzymes may be sensitive to their environments, specially designed enzymes (we can be creative) and protein complexes (with their greater structural stability) will be able to withstand more challenging conditions. Furthermore, un-optimized proteins only need to serve as initial versions of molecular machines. I expect that as the capabilities of molecular machines grow, their resilience will as well.
Question 5: Some might argue that computational models do not fully account for the real world, and that heavily relying on them will not be as useful as spending that time on more experimentation. What is your response?
While I do agree that repeated, well-measured physical experimentation is the gold standard, computational methods can be quite accurate when used in specific contexts and are significantly easier to scale and iterate upon than wet-lab experiments.
Conclusion
The creation of molecular nanotechnology (highly capable molecular machines that assemble products with atomic precision) could rival or surpass the impact of the Industrial Revolution.
As was explained in the paper, I believe the best way to achieve molecular nanotechnology is through a private company focused on both scanning probe microscopy and protein design.
Thus, if you are an investor, technologist (particularly in AI, protein design, or scanning probe microscopy), or just a nanotechnology enthusiast, then I would appreciate further communication with you via my X account @jacobrintamaki or my email address, jrin at stanford dot edu.
Some key insights into K. Eric Drexler’s psychology are as follows: Drexler stated that reading “The Limits to Growth” inspired him to look into aerospace technology, since that work did not consider space when discussing limits to growth. During this period he became a colleague of O’Neill around the time O’Neill built his first mass driver, and that led Drexler, during the “biotech optimism” of the 1970s and 80s, to investigate the limits of technology more broadly, not just in space. This is what led him to the earliest ideas of biochemical molecular nanotechnology.
Christine Peterson, MIT-trained chemist, Foresight Institute co-founder, and Drexler’s ex-wife, noted that the internet as a means of communication and the greater experimentalism of private capital were two of her greatest reasons for hope in 2024 regarding the development of nanotechnology.