Robotics is entering a new phase, driven by advances in AI perception, learning and planning, often described as “physical AI”. These technologies are expanding what robots can attempt in the real world, from industrial environments to healthcare and logistics.
Arthur Richards
Professor of Robotics and Control, University of Bristol and co-director of the Bristol Robotics Lab
Robotics has always carried a certain promise. For decades we have imagined machines that work alongside people, helping with everyday tasks and making life easier. It is a fun and often romantic notion, driven by popular culture, from Robby the Robot in Forbidden Planet to R2-D2 in Star Wars.
In practice, robotics has developed rather differently. Most robots today operate in tightly controlled environments such as factories and warehouses, where their movements can be carefully defined in advance. These systems are incredibly effective at repetitive, structured tasks, but far less suited to the unpredictable nature of the real world.
What is beginning to change that picture is artificial intelligence.
AI is introducing a new way of thinking about how robots are designed and operate. Traditionally, robots have been controlled through carefully written code. Engineers would build a model of the machine and then use that model to decide how it should move, react and interact with objects around it. Robotics required detailed understanding of the machine and a great deal of programming to translate that knowledge into action.
AI is introducing a different approach. Techniques such as reinforcement learning and vision-language-action (VLA) models allow robots to learn behaviours from data rather than relying entirely on explicit instructions. In simple terms, instead of telling a robot exactly what to do, we can increasingly train it to work things out for itself.
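To make that concrete, here is a toy sketch of the reinforcement-learning idea (invented for illustration; the grip forces, reward function and task are hypothetical, not drawn from any particular robot stack). The robot is never told the right grip force: it tries actions, observes a reward and gradually learns which ones work.

```python
import random

# Candidate grip forces (N) the robot can try.
ACTIONS = [2.0, 5.0, 10.0, 20.0]

def grasp_reward(force):
    """Hidden 'world': too weak and the cup slips, too strong and it cracks."""
    if force < 4.0 or force > 12.0:
        return 0.0
    return 1.0

value = {a: 0.0 for a in ACTIONS}   # learned value of each action
counts = {a: 0 for a in ACTIONS}

for step in range(2000):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=value.get)
    reward = grasp_reward(action)
    counts[action] += 1
    # Incremental average of observed rewards for this action.
    value[action] += (reward - value[action]) / counts[action]

print({a: round(v, 2) for a, v in value.items()})
# The robot converges on forces that succeed, with no explicit model
# of friction or the object's fragility.
```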
That change is already influencing the way robotics research is carried out. Instead of focusing purely on modelling and programming, researchers now spend much more time thinking about data, where it comes from, how much is needed, and how systems can learn from it. Robotics is becoming more data-driven, and that has implications for how the field develops and for the kinds of skills future roboticists will need.
AI is also opening the door to richer interaction between robots and the physical world. Researchers are developing sensing systems that go beyond simple contact switches to give machines a more nuanced understanding of touch. Rather than detecting only whether something has been grasped, robots can begin to interpret shape, texture and grip, which is essential when handling objects or working alongside people.
At the same time, there are limits to what robots can currently do. Modern systems are becoming very capable at perceiving their environment, recognising objects, identifying people and mapping the world around them. But understanding what those environments mean, or predicting how situations might evolve, remains a more complex challenge.
Even so, if AI can help robots operate more flexibly and safely, it opens the possibility of machines working outside the carefully controlled environments they inhabit today. That would allow robots to move beyond cages and production lines and begin to operate alongside people in far more varied settings.
For those of us working in robotics, that remains one of the most interesting and demanding challenges in the field, and indeed, one that industry is pushing us to solve.
A digital publication from Bristol Innovations, University of Bristol
Upcoming Event
A panel discussion exploring where robots have genuinely improved through AI, why demos often exceed practical performance, and how safety and trust affect adoption.
Hosted by Bristol Innovations
Register on Eventbrite

| Robotics domain | Where AI adds value | Deployment constraints |
|---|---|---|
| Manufacturing | AI improves vision, quality inspection, optimisation and faster changeovers | Integration with legacy systems; ROI justification rather than technical feasibility |
| Warehousing & logistics | AI enables navigation, object recognition, fleet coordination and adaptability | Safety certification, human–robot interaction, system integration at scale |
| Healthcare | AI supports precision, decision assistance and workflow efficiency | Trust, liability, validation requirements and regulatory approval |
| Construction & field robotics | AI improves perception, navigation and task planning in variable conditions | Environmental unpredictability, safety risk, limited training data |
| Agriculture | AI enables crop monitoring, selective harvesting and autonomous operation | Weather variability, reliability, economics of deployment |
| Hazardous environments | AI reduces human exposure and supports remote or autonomous operation | Assurance, reliability under edge conditions, high cost of failure |
| Humanoid robots | AI enables learning, manipulation and flexible task execution | Safety, validation, cost, energy use and lack of clear near-term use cases |
Sources: International Federation of Robotics; Deloitte; Barclays Research; The Robot Report; BI Foresight analysis.
| Robotics domain | AI’s primary role | Practical industry value | Deployment constraints |
|---|---|---|---|
| Industrial robot arms (manufacturing) | AI-assisted perception & optimisation | Flexible automation, quality inspection, changeovers | Legacy integration, ROI, safety governance |
| Warehouse & logistics AMRs | Perception + motion planning + coordination | Route optimisation, dynamic obstacle avoidance | Certification, human interaction, scalability |
| Healthcare & surgical robotics | Perception + safety monitoring | Assistance in precision tasks, patient monitoring | Trust, liability, regulation, clinical validation |
| Construction & field robots | Perception + adaptive planning | Terrain mapping, navigation assistance | Environment variability, reliability risk |
| Cobots (collaborative robots) | Human proximity perception | Safe co-working with humans, adaptability | Safety assurance, human trust & intent prediction |
| Humanoid robots (experimental) | Perception + complex planning | Flexible manipulation in unstructured spaces | High cost, limited real-world validation |
Source: International Federation of Robotics, Position Paper on AI in Robotics; BI Foresight analysis.
Chart not reproduced: an expert assessment of which robot capabilities traditional machine-learning techniques can bring to human or superhuman level, and where foundation models are essential – noting that state-of-the-art large language and vision-language models have likely already exceeded baseline human ability in language and visual understanding.
Source: McKinsey & Company; BI Foresight analysis.
Source: Mark Osis, Raquel Buscaino, and Caroline Brown, “Robotics & physical AI,” Deloitte, 2025.
“AI is transforming the field of robotics at a rapid pace,” Takayuki Ito, President of the International Federation of Robotics, said recently, adding: “Integrating AI into robotics enhances capabilities, increases efficiency and improves adaptability. This development is transforming AI from a supporting technology into a powerful enabler, opening the door to wider robot adoption across industries.”
It’s a powerful message, and one that is being amplified as AI systems mature at exponential rates while the hardware components needed to build robots fall in price.
So-called physical AI will allow machines to function autonomously and perform tasks in the real world, bringing a host of potential new roles for robots. And there are huge benefits for AI systems too – learning from real-world data would take them to the next level of understanding and intelligence.
Rudimentary robots are already among us, sometimes popping up in the most surprising of places. On a recent visit to a café on the Isle of Portland in Dorset, my coffee was served by a robot called Bella. A wheeled bot on a pedestal base (to ensure she didn’t fall over), Bella had elements of humanity: a screen displaying eyes, and a female voice.
And, despite the fact that she wasn’t very good at her job – in fact her repeated refrain was ‘please can you let me get through, I need to get by’ – she was a hit with customers. Meanwhile, on a recent press visit to Estonia, I came across a more traditional robot arm in a timber factory deep in the forest.
According to a 2024 report from the International Federation of Robotics, 4.6 million industrial robots were operating in factories around the world. Robots can work faster and more efficiently than humans, and they can take on long night shifts without getting tired or losing concentration.
In a recent white paper, the World Economic Forum forecast that physical AI would kickstart “a new breed of smarter, more agile industrial robots”. It pointed to early adopters such as Amazon and Foxconn and the benefits they are deriving – improved efficiency, faster delivery times and the creation of new skilled jobs.
Amazon has more than one million robots in operation, sorting, lifting and carrying packages. These range from robot arms that manipulate packages to more Roomba-style bots that move around warehouses carrying vast payloads. Some of them – such as Titan and Hercules – are confined to robot-only areas, reading barcodes stuck to the floor as navigation coordinates.
Proteus is Amazon’s first fully autonomous mobile robot, meaning it can navigate freely throughout a site using sensors to detect and avoid objects in front of it. It is also rumoured that Amazon is testing humanoid robots that could in future deliver parcels to homes.
The other key area where robot use is growing is healthcare. According to the NHS, half a million operations will be supported by robotic surgery over the next decade while US market leader Intuitive Surgical reports that more than 20 million procedures have been performed using its systems.
Further out – and raising some ethical considerations – is the use of humanoid robots in healthcare settings, for example helping the elderly in care homes. Robots like Bella are already in use in some hospitals and clinics.
According to RBC Capital Markets, the addressable market for humanoid robots will be worth $9 trillion by 2050, and it believes China will account for more than 60% of that.
Chinese firms such as Unitree are leading the field. In February 2026, the firm’s G1 humanoid robot walked more than 130,000 steps across a -47.4°C snowfield, marking the first autonomous walk by a humanoid robot in extreme cold conditions.
China also leads the world when it comes to industrial robots, with more than two million factory robots in operation, according to the IFR.
And, according to Morgan Stanley, the country has issued five times as many robot-related patents as the US over the last five years. With the US playing catch-up, recent reports suggest that US Commerce Secretary Howard Lutnick has been meeting CEOs of robotics companies to form a plan to accelerate the industry.
The US has some interesting companies working in robotics. Start-up Figure, backed by OpenAI, is on its third generation of humanoid robot, using a system dubbed Helix which it says will allow the bot to “navigate unpredictable, ever-changing home environments”. The firm envisages Figure being used in both home and industrial settings.
At the firm’s headquarters, the robot is quite literally being put through its paces – recent photos show it running alongside human employees. It is also learning a series of household tasks, such as folding laundry.
But it faces challenges from Elon Musk’s Tesla, which is also building a humanoid robot dubbed Optimus. In a recent earnings report, Tesla said that mass production of the robot will begin in 2026.
Humanoid robots are often limited to demonstrations, usually very carefully orchestrated ones, designed to get investors excited about the possibilities and to generate PR for the firms.
And these can still go wrong. Russia’s first AI-powered humanoid robot, named AIdol, fell face-down on stage just seconds after its debut at a technology event in Moscow in November 2025.
Meanwhile China is going a stage further, showcasing its robotics industry at events such as a humanoid half-marathon (only six of 21 robots finished a recent race) and the World Humanoid Robot Games, which features hundreds of bots from countries around the world. They compete in traditional sports such as athletics and football, as well as performing everyday tasks. Chinese robotics firm Unitree often dominates.
We have seen huge leaps in the ability of machines to understand natural language, with systems such as GPT-4 passing the Turing test (in which a machine’s responses are indistinguishable from a human’s).
These breakthroughs mean robots can converse with humans in the same way we talk to chatbots and virtual assistants. Similarly, breakthroughs in computer vision are giving robots eyes on the world.
Visual AI was pioneered by people like Professor Fei-Fei Li, who worked on ImageNet, the first large-scale visual learning dataset. She recently warned, though, that transferring human-like vision to robots remains elusive. While human spatial and visual awareness evolved over many millennia, acting as a bridge “between perception and survival”, robot vision has no such pedigree.
Robots need to go back to school to prepare themselves for the real world.
Professor Li has kick-started the effort to teach this new generation of bots with a start-up called World Labs, which is building frontier AI models that can perceive and interact with the 3D world.
Chip giant Nvidia is also busy creating simulated training camps for robots.
These environments can train robots more quickly and efficiently than simply putting them in the real world, and they also help to spot problems in human–robot interaction before they arise. In a BMW factory in China, for example, Nvidia’s Omniverse system was used to train autonomous mobile robots moving around the factory. It discovered that robots were being blinded by sunlight shining through a window at a certain angle at a certain time of day. The solution was simple and low-tech: add shutters to the window.
It is crucial that robots don’t just see the world but understand its context and complexity.
UK autonomous vehicle firm Wayve has a vision model called Gaia. If it sees a ball in the road ahead, partly obscured by a parked car, it will know that the ball is not the only risk – a child is likely to run out from behind the car – because it has become smart enough to know that children often throw balls.
Building general purpose autonomy needs many stages and a whole host of new data before it can become a reality.
Working at the frontier of robotics and AI? The Bristol Innovations Zone connects businesses with world-leading researchers to turn R&D ambitions into real outcomes. Find out how.
Explore BIZ
Paul Miller
Principal analyst, Forrester Research
We’ve had industrial robots for 60 years, and they tend to do high-volume, low-variance tasks. As we start to get more opportunities with AI, we see that change.
It becomes easier to program the robots to follow specific rules, using systems such as Siemens Industrial Copilot, for example. We can use AI simulations of virtual worlds to let the robots practise tasks hundreds or thousands of times, to gain skills they can then employ in the real world. Here we are looking at things like Nvidia’s Omniverse and Isaac Sim.
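As a rough illustration of that rehearsal idea (a toy sketch, not Omniverse or Isaac Sim themselves; the task and noise models are invented), each simulated episode randomises the world, so success statistics measure robustness rather than one lucky configuration:

```python
import random

def episode(grip_offset):
    """One simulated grasp, with object pose and friction randomised."""
    object_pos = random.gauss(0.0, 0.02)   # placement noise (metres)
    friction = random.uniform(0.4, 1.0)    # surface variation
    # Succeed if the gripper lands close enough and the surface holds.
    return abs(grip_offset - object_pos) < 0.03 and friction > 0.5

# Evaluate candidate behaviours over thousands of cheap rehearsals each.
for grip_offset in (0.0, 0.02, 0.05):
    rate = sum(episode(grip_offset) for _ in range(10_000)) / 10_000
    print(f"offset {grip_offset:+.2f} m -> success {rate:.1%}")
```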
AI has also changed a robot’s ability to communicate, with systems like Gemini Robotics or Wayve’s Lingo, which links vision, language and action (VLA).
A robot may watch 1,000 videos of someone holding a ball and letting go of it, and so it knows that 1,000 times out of 1,000, when you let go of a ball it falls. But that doesn’t mean it understands Newton’s laws. It doesn’t understand wind resistance and it doesn’t understand the laws of physics. It is observing and inferring. The more data you have, the closer you get to something like understanding.
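A minimal sketch of that distinction, using synthetic data invented for illustration: a model fitted purely to observed drops predicts well inside the range it has seen, but unlike Newton’s laws it has no grounds to extrapolate beyond it.

```python
import numpy as np

# "Observed" drops: the robot only ever sees (height, fall_time) pairs.
rng = np.random.default_rng(0)
heights = rng.uniform(0.5, 2.0, 1000)        # metres
fall_times = np.sqrt(2 * heights / 9.81)     # what actually happens...
fall_times += rng.normal(0.0, 0.01, 1000)    # ...seen through sensor noise

# Pattern-matching: fit fall_time against sqrt(height), with no notion
# of gravity, mass or air resistance.
coef = np.polyfit(np.sqrt(heights), fall_times, 1)

def predict_inferred(h):
    """Prediction by inference from data alone."""
    return np.polyval(coef, np.sqrt(h))

def predict_newton(h, g=9.81):
    """Prediction from the physical law t = sqrt(2h/g)."""
    return np.sqrt(2 * h / g)

# Inside the observed range the two agree closely...
print(predict_inferred(1.0), predict_newton(1.0))
# ...but only the law can adapt to a new world, e.g. lunar gravity.
print(predict_inferred(1.0), predict_newton(1.0, g=1.62))
```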
Building a map of the physical space is important, and companies such as Hexagon and Siemens are doing that. They already have a lot of data that has been used for factory design or maintenance.
Robots can also be used to gather information: Ford is using robotic dogs fitted with lidar at its plant in Michigan to collect data about the facility.
Safety is a huge challenge and a huge concern, but one that is being actively addressed.
One of the key things to think about is the difference between robots and the generative AI models we use every day: LLMs are non-deterministic, and therefore hallucinate and get things wrong.
With robots, you need to build some determinism in. You need some facts and you need some guardrails – boundaries that are hard-coded and cannot be crossed. And you need to design a system that fails safe.
You can design robots so they won’t go near a person, or you can design them so that they stop if a person moves in front of them. But how do you deal with things when they go wrong? Robots are heavy: if the power fails, how do you make sure the robot doesn’t instantly topple over? If the vision system fails, how do you make sure the robot stops? What if a banana is squishier than the one it was trained to pick up?
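As a sketch of what those guardrails might look like in code (the limits, names and interfaces here are hypothetical, invented for illustration), the learned policy can propose anything, but a deterministic wrapper decides what actually reaches the motors:

```python
from dataclasses import dataclass

@dataclass
class Command:
    joint_speed: float    # rad/s requested by the learned policy
    gripper_force: float  # newtons

MAX_SPEED = 0.5           # hard-coded boundaries that cannot be crossed
MAX_FORCE = 15.0

def guarded(cmd: Command, person_detected: bool, vision_ok: bool) -> Command:
    """Deterministic guardrail around a non-deterministic policy."""
    # Fail safe: if perception degrades or a person is close, stop.
    if not vision_ok or person_detected:
        return Command(joint_speed=0.0, gripper_force=0.0)
    # Otherwise clamp the policy's output to the safety envelope.
    return Command(
        joint_speed=max(-MAX_SPEED, min(MAX_SPEED, cmd.joint_speed)),
        gripper_force=min(MAX_FORCE, max(0.0, cmd.gripper_force)),
    )

# The AI may propose anything; the guardrail decides what executes.
print(guarded(Command(2.0, 40.0), person_detected=False, vision_ok=True))
print(guarded(Command(0.2, 5.0), person_detected=True, vision_ok=True))
```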
You need to map the space the robot will be in and the task it will be doing. You need to understand what the hand-off points are in that task and program all of that, but then be able to cope with so-called edge cases – what if the lighting is at a slightly different angle than in the training dataset?
The obvious home remains those fairly controlled environments, such as factories, warehouses and places like mines.
Adoption will then move a little more slowly into areas where robots interact with ordinary people.
We are seeing robots that deliver food in restaurants or room service in hotels, but those robots tend not to have arms and legs. Instead they trundle along, essentially a cart with a tray on it.
Britbots is an investment firm focused on UK-based robotics start-ups and has backed over 50 companies around the UK.
When investing we look for three things. The first is businesses that can demonstrate how they will transform whatever they are focused on in a really meaningful way. The second is companies with the potential for international reach. And the third is more about the people: backing credible founders who will have a fighting chance of actually being able to market and sell their robots, rather than just build them.
VLA models, which are effectively the physical-world equivalent of a large language model, are an area we’re really active in, and we’ve got a number of companies at the cutting edge of developing those models.
It’s an area we have to tread quite carefully in, though, as I suspect we are still a number of years away from the kind of systems that can immediately translate into great commercial results.
You have got to be quite selective in the battles you fight. Small UK companies are not going to compete in the world of humanoid robots because these things are backed by enormous tech companies or states such as China. But on the business-to-business side and industrial side, there is almost a limitless selection of applications that need to be automated and most need a bespoke system.
There are a number of niche areas where the UK is particularly strong. We’re quite active in autonomous vehicles at sea for example and we have four or five companies that are genuine world leaders. Perhaps not surprising given that we’ve got so much coast around this country.
Extend Robotics is a UK-based start-up that develops virtual reality interfaces for controlling robots and was recognised by Nvidia as one of the top five robotics companies in the UK.
We build software that can be used intuitively to connect to a robot and operate it remotely from anywhere in the world. The kit has super-low latency and uses virtual reality. Anybody can use and control the robot, which is a game changer for the industry. And it means robots can be deployed very quickly: traditionally you would need very high levels of expertise, but with our system you just plug and play.
We’re working with Leyland Trucks in Preston, where we’re deploying robots that can paint trucks and assemble parts – jobs done in environments that are toxic for humans, or monotonous jobs that require very long hours.
In agriculture we have several major projects: one for strawberry and tomato picking and one for grape picking. These are very high-value crops in an industry where it has been difficult to find workers.
We have also partnered with Nvidia because we have found that simulation provides the backbone for training robots.
In the UK there are lots of industries which are still manual because there is a very high level of variation in them, so they have traditionally avoided automation. We have smart robots to address these problems.
Our vision is to provide intelligence to robots. That transfer of skills takes human involvement. With our software, a skilled human can train the robot, teaching it things like dextrous movements. We can collect data from those movements, down to millimetre-level tasks such as screwing in a bolt, which would be very expensive to program by hand. With our embodied-AI machine-learning technology, this is achievable. And on the journey from a skilled human to a skilled robot, there are lots of steps in between.
Rory’s focus areas for the UK’s technology trade association include robotics, immersive technology, photonics, space, innovation policy and university spin-outs.
Several universities have ramped up activities around robotics plus AI as a departmental focus. They see robotics as less about hardware and engineering and more about the software and the intelligence behind it. techUK is working with Nvidia and QA (the UK’s largest training provider) to look at offering apprenticeships that join up the pipeline, so that universities produce a really high calibre of students with industry experience in robotics plus AI.
It’s about taking a lot of the work that we’ve already done around digital services – standards, guardrails, assurance mechanisms – and applying it to physical systems. We are working with the likes of the Alan Turing Institute, the Open Data Institute and the Ada Lovelace Institute, and we run an annual digital ethics summit.
There is very little robotics-specific regulation. As an example, we have a drone company that can deliver blood samples between hospitals. But for loading onto the drone and offloading, they need to deploy a robot to trundle along the corridors, pick up the sample and take it to the clinician. In doing this, the firm runs into a number of different overlapping regulatory regimes, covering things like data collection and health and safety. The Regulatory Innovation Office (announced in October 2024) is bringing all of these around the table to unpick this complex web of regulation and streamline the path to deployment for companies.
We’re pushing for more ambition. There is an immense opportunity for the UK to extend AI beyond the screen into hospitals, schools, roads, you name it. So it’s a question of how we deploy it and scale it and seize the opportunity as quickly as possible. There are huge potential gains around productivity to be had.
It’s partly about education and upskilling VCs and bringing in the institutional investors, like pension funds, the National Wealth Fund and the British Business Bank. It’s about joining up the funding pipeline too.
As well as being professor of swarm robotics, Sabine is president and co-founder of robohub.org, a non-profit organisation and communication platform that brings together experts in robotics research, start-ups, business and education from across the globe.
Real-world deployments for swarm robots need them to work out-of-the-box at scale in messy places: think homes, agriculture, construction, environmental monitoring, agile manufacturing, city logistics.
As digital AI has moved from the pursuit of AGI to narrower, collaborative agents (agentic AI), I hope the same will happen in robotics, with a move from general-purpose robots to collectives of more specialised and useful robots. Perhaps we can achieve general intelligence through these narrow agents, using swarm AI.
The key benefit of swarms is that individual robots are narrowly designed, and so easier to implement and test. As a result these robots can be made greener and more ethically tractable than more complex or general systems.
Intelligence is embedded in each robot, giving rise to a distributed system that can scale to small or large numbers without modifying any infrastructure – just take more robots out of the box.
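A toy sketch of that property (invented for illustration): every robot runs the same simple local rule, the collective behaviour emerges with no central controller, and scaling up means nothing more than adding robots to the list.

```python
import random

def step(pos):
    """Local rule run on each robot: noisy step towards a shared target."""
    x, y = pos
    return (0.9 * x + random.gauss(0, 0.05),
            0.9 * y + random.gauss(0, 0.05))

# "Take more robots out of the box": change 50 to 500 and nothing
# else in the system needs modifying.
robots = [(random.uniform(-10, 10), random.uniform(-10, 10))
          for _ in range(50)]

for _ in range(100):
    robots = [step(p) for p in robots]   # no central coordination

mean_dist = sum((x * x + y * y) ** 0.5 for x, y in robots) / len(robots)
print(f"mean distance to target after 100 steps: {mean_dist:.2f}")
```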
New tools such as vision-language-action (VLA) models may help push swarms into the realm of real-world feasibility.
Swarm robots need on-board intelligence to operate out of the box using their own perception and cognition, which makes them interesting for environments that are more challenging or lack infrastructure.
Most of our swarms are simpler agents that don’t currently fit the humanoid focus of today’s VLAs. Perhaps we should be training these models for a zoo of robot types, on the idea that the world will be best served by robots with different forms, rather than aiming for AGI in humanoid form.
Robots operating in the real world need to do so safely and reliably.
In May last year it was widely reported that a robot in an undisclosed factory in China had malfunctioned, waving its arms around wildly and nearly hitting nearby workers. Robots are essentially very heavy moving computers, and hugely complex ones. Getting clonked on the head by one could be fatal.
It is why so many of the robots in use today operate within constrained environments, separated from human workers – and changing that will be done slowly and incrementally.
It is vital that humans trust the machines working or living alongside them. As well as building in failsafe systems, robot makers have to think about the design of their robots and the so-called uncanny valley effect – where a robot that looks almost, but not quite, human makes people uncomfortable.
And there are more practical considerations too. While the components needed to make a robot – sensors, batteries and controllers – are falling in cost, robots remain incredibly expensive to build. According to recent UK government figures, a single humanoid robot costs between $40,000 and $150,000 to build.
Complex robots will need a lot of power, and finding light, long-life batteries will be challenging. Figure’s humanoid robot has a docking station it returns to when low on power, recharging from the feet up, and this could become standard for walking bots.
System integration will become increasingly necessary in a robot-friendly world. Take the simple service robots often seen operating in hotel lobbies. Taking them to the next level (quite literally) will mean a robot has to move between floors – so that it can deliver room service, for instance. And that will require software that integrates the robot with the lift so that the two can talk to each other.
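As a minimal sketch of what that integration might involve (the lift and robot interfaces here are hypothetical, invented for illustration; a real deployment would use the building’s actual lift-control API):

```python
import time

class SimulatedLift:
    """Stand-in for a building lift controller; arrives instantly."""
    def __init__(self):
        self.floor, self.doors_open = 0, False

    def call_to(self, floor):        # summon the car to a floor
        self.floor, self.doors_open = floor, True

    def select_floor(self, floor):   # press a destination button
        self.doors_open = False
        self.floor, self.doors_open = floor, True

class SimulatedRobot:
    def drive_into_lift(self):
        print("robot: entering lift")

    def drive_out_of_lift(self):
        print("robot: exiting lift")

def ride_lift(lift, robot, from_floor, to_floor):
    """Sequence a service robot through a lift journey."""
    lift.call_to(from_floor)
    while not (lift.floor == from_floor and lift.doors_open):
        time.sleep(0.1)              # wait for the car with doors open
    robot.drive_into_lift()
    lift.select_floor(to_floor)
    while not (lift.floor == to_floor and lift.doors_open):
        time.sleep(0.1)
    robot.drive_out_of_lift()

ride_lift(SimulatedLift(), SimulatedRobot(), from_floor=0, to_floor=3)
```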
Having robots that understand and interact with their surroundings is an exciting prospect for businesses and society more generally but a complex one to make reality.
2026 is a year in which this merger of AI and robotics is likely to continue, but the jury remains out on what the purpose of fully humanoid robots would be.
Keeping humans in the loop though will be essential as the industry seeks to gain trust from the public and prove that its robots can integrate into society.
One of the biggest challenges will be the need to upskill the workforce, both in terms of those building the robots and those working alongside them.
The world is currently experiencing an IT skills shortage, and while vibe-coding (getting AI systems to write code) is helping to fill the gaps, experienced coders will be needed in the robotics world more than ever.
For the last decade as a tech journalist, I have seen dozens of robot demos – often showing off dancing skills – and every one of them seemed to have an army of programmers behind it.
The time may be ripe to move on from dancing to perform more useful functions in the real world, and AI will play a huge part in making that a reality.
The Bristol Innovations Zone (BIZ) is a gateway for collaboration between industry and the University of Bristol – giving businesses structured access to deep-tech research, specialist facilities and emerging talent. Whether you’re scaling an existing programme or exploring new directions, BIZ is built for partnerships like yours.