RTN: Zombie VCs and what that means for founders
+ DeepMind's new prediction model and huge computing clusters
The RealTech Conference is next Wednesday and tickets are sold out, but if you’d like to join the waitlist, sign up here.
We’re hosting a panel (speakers below) on ‘Raising later-stage funding for Frontier Tech companies’
If you’re a founder of a scaled Frontier Tech company and want to help fellow founders in less than 2 minutes 🙏🏻
Please consider filling out this very quick survey on what metrics are needed to raise a Deeptech Series A round. Survey → here
I will be sharing the results here, in the coming weeks.
Zombie VCs, rationalisation of the VC world and advice for founders
Numerous founders messaged me this week about VC funds going out of business and what this means for startup founders. tl;dr: The winding down of a very small sub-set of small managers will cause short-term volatility for some founders but, in the long-term, this will be good for an asset class that has been overcapitalised.
Up and to the right
Over the last cycle, with the emergence of ‘seed’ and ‘late stage’ capital, the venture industry evolved from being a niche alternative into a sizeable, multi-stage asset class. It went from being the Hermes Birkin bag (expensive, hard to find and very valuable if you hold on to it long enough), to Costco (price as the main discriminator, something for everyone).
Capital flowed into VC largely because low interest rates pushed investors to look elsewhere for yield, coinciding with simultaneous platform shifts in cloud and mobile technologies.
When venture is not venture
A key characteristic of the last cycle was venture firms expanding and co-opting non-venture risk dollars, moving into growth equity (‘Late Stage’ and ‘Technology Growth’ in the chart above). This is a move away from the historical norm of venture being used to fund risky R&D, in advance of revenues. Google, which IPO’d 5 years after founding, having raised only an angel and Series A round, is an example of that old venture norm.
Down and to the right
Now, the economic conditions which allowed venture to grow in the last cycle are disappearing. Interest rates will be higher for the foreseeable future, valuation multiples have contracted and the effect of poor investment decisions made when capital was too readily available (in 2021) is starting to show in the VC ecosystem. Funding has slowed, more companies are dying and some companies remain overvalued. My view, as I previously covered, is that late-stage/growth capital will be hit the hardest. Founders should not expect much Series B+ capital.
This contraction in capital is affecting both startups and fund managers. There are a few reasons some managers are packing it in:
The VC has made enough money and doesn’t need the headache of a difficult next fundraise (likely larger institutional grade fund managers)
The VC hasn’t made money and feels there is an opportunity cost in pursuing another (likely sub-scale boutique seed managers)
Tourist capital, some CVCs, cross-over investors and family offices have retrenched and may not return to market
What does this mean for founders?
Generally, there is not much to be concerned about. If you have raised from institutional-grade VCs, it is unlikely you will be affected; these managers will actively manage follow-on capital and board coverage. VCs are paid management fees over a 10-year period; even if they make no net new investments, they will manage current vehicles and investments over their whole lifecycle.
If you are a founder who has raised from VCs in the second and third bullet points above, you need to work out how dependent you are on your VC’s follow-on capital and on their continued support and engagement. You also need to understand how the manager expects to wind the fund down: Will they oversee it personally, will the LPs step in, or will they sell the portfolio to a secondaries fund?
A needed correction in VC
Venture is a constrained asset class that is likely going back to the historical norm of being small, early, very risky and producing high asymmetry of risk/return. The contraction of unicorns, startups and VCs is a normal part of the cycle and the natural senescence of markets. Whilst the shake-out of managers is creating volatility for startups, who might be wondering where their VC has gone, in the mid to long term it will rationalise the market.
Rocket Lab are launching a hypersonic test vehicle for DIU (Space News)
China’s Loongson races towards 7nm chips in 2024 (Tom’s Hardware)
Namibia is the first African country to run a green-iron plant (Business Insider)
Energy and Climate
Rolls-Royce is the first manufacturer to confirm that all of its engines can run on sustainable aviation fuel (SAF). People are excited about SAF for a few reasons: aviation accounts for around 3% of global GHG emissions; SAF use can cut lifecycle CO2 emissions by up to 80%; and it doesn’t require massive reinvestment in new infrastructure or propulsion systems.
SAF could be a big lever for reducing emissions in the sector, especially for long-haul flights. But SAFs are not a panacea and have some notable issues: production capacity needs to scale by 5x and costs need to drop dramatically for them to make economic sense. The biggest issue is that the feedstocks used to produce SAF require an uneconomical amount of farmland. By some estimates, we would need 20-50% of developed countries’ farmland to produce enough SAF, which is obviously not feasible given competing land uses.
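The figures above imply a hard ceiling on SAF’s global impact. A back-of-the-envelope sketch, using only the numbers from the text (3% aviation share, 80% best-case reduction; the adoption fractions are illustrative assumptions):

```python
# Back-of-the-envelope ceiling on SAF's global emissions impact.
# Figures from the text: aviation ~3% of global GHGs, SAF cuts
# lifecycle CO2 by up to 80%. Adoption levels below are hypothetical.

aviation_share = 0.03   # aviation's share of global GHG emissions
saf_reduction = 0.80    # best-case lifecycle CO2 reduction from SAF

def max_global_reduction(adoption: float) -> float:
    """Global GHG reduction if `adoption` fraction of jet fuel is SAF."""
    return aviation_share * saf_reduction * adoption

# Even at 100% adoption, the global ceiling is modest:
print(f"{max_global_reduction(1.0):.1%}")  # full adoption -> 2.4%
print(f"{max_global_reduction(0.1):.1%}")  # 10% adoption  -> 0.2%
```

So even in the best case, universal SAF adoption removes about 2.4% of global emissions, which is why the sector framing (“a big lever for aviation, a small lever globally”) matters.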
DeepMind just published a graph neural net (GNN) model which delivers significantly improved 10-day weather predictions in under 10 minutes.
GraphCast beat state-of-the-art weather forecasts on 99% of predictions and, critically, requires minimal compute. The model has 36.7m parameters and can be trained on a single TPU. Currently, most weather forecasts run on huge national supercomputing clusters, such as NOAA’s in the US, which has some 15,000+ CPUs and costs hundreds of millions of dollars to build and run.
DeepMind shows AI’s ability to understand and improve on real-world chaotic physics systems, and to massively reduce the need for complex compute (more below). It’s interesting to see DeepMind focus most of its efforts on high-value, complex problems (physics, robotics, protein folding) versus low-value text generation.
Google just trained an LLM requiring 10^18 FLOPs across 50,944 Cloud TPU v5e chips, spanning 199 distributed Cloud TPU pods. To do so, they built and released ‘Google Cloud TPU Multislice Training’, an architecture which allows efficient end-to-end training of large models on distributed clusters.
LLM model size has been growing by 750x every two years, whilst Moore’s Law delivers a mere 2x. Critically, as Google shows, the efficiency of these systems scales linearly: if you want more compute, you’ve got to rack more pods. In a world of exponentially growing model sizes, we are currently limited by the production of chips and the availability of grid power; something has to give.
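The arithmetic behind that tension is stark. Taking the two growth rates from the text at face value, and assuming (simplistically) that required compute tracks model size and that per-chip capability tracks Moore’s Law, the chip count needed compounds like this:

```python
# Illustrative arithmetic for the scaling gap described above.
# From the text: model size grows ~750x per two years, per-chip
# capability (Moore's Law) only ~2x. The linear-compute assumption
# is a simplification for illustration.

model_growth_per_2y = 750
chip_growth_per_2y = 2

def chips_needed_multiplier(years: float) -> float:
    """How many more chips are needed after `years`, net of
    per-chip improvement, if compute scales with model size."""
    periods = years / 2
    return (model_growth_per_2y / chip_growth_per_2y) ** periods

print(f"{chips_needed_multiplier(2):,.0f}x")  # after 2 years: 375x the chips
print(f"{chips_needed_multiplier(4):,.0f}x")  # after 4 years: 140,625x
```

Two doubling periods at these rates would require six orders of magnitude more chips, which is why chip supply and grid power, not algorithms, look like the binding constraints.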
Coatue just published an estimate of GPU demand:
🦾 Manufacturing and Robotics
ETH Zurich has just used a novel 3D printing technique to produce a new form of human-like hardware.
We’ve seen large advances in language models and computer vision applied to robotics over the last year, yet less progress on the hardware side. ETH, together with MIT startup Inkbit, has used a novel additive manufacturing process combined with UV curing (similar to processes used industrially for curing glues). This has allowed them to print both rigid and elastic materials simultaneously, at very high resolution. To demonstrate the potential of the new 3D printing process, the researchers printed a robotic hand. The device features rigid bones modelled on MRI scans of human hands and elastic tendons that can be connected to servos to curl the fingers in toward the palm.