Robots and humans will be working closer than ever in the not-too-distant future. Whether in private, commercial or industrial spaces, it's imperative that robots understand where they are and what's around them. Join this session as three industry experts discuss how they're helping robots better see the world around them – and adapt to it.
Ariyan Kabir, co-founder and CEO at GrayMatter Robotics; Maya Pindeus, founder at Humanising Autonomy; and Owen Nicholson, co-founder and CEO at SLAMcore, were in conversation with Pascale Davies, technology reporter at Euronews, on the Talk/Robot stage at Web Summit 2022.
Hi, everyone. Thanks so much for joining us today on our panel with esteemed guests. I want to kick things off with you, Owen, from SLAMcore. You're working on providing a standardised solution for robots, with AR and VR at its core. So let me just start with: is it going to happen? Are we going to see wearables with 20/20 vision in the near future? Owen Nicholson:
Simple answer: yes, I firmly believe that's going to be the case. When I founded SLAMcore six years ago, we had one fundamental mission: to help companies build machines with spatial intelligence that have vision at the core. Fast forward to today, we're now 50-strong with $30 million raised, and we have deals with some of the biggest companies on the planet. In the next couple of years there'll be millions of machines with our code running on them in the wild. The one thing I'm certain of is that we're now entering a new age: the age of the spatially intelligent machine, which is going to make the internet age look like just a bit of fun. These machines are going to compete on their ability to understand space, and vision will be a crucial part of that. It's going to touch every element of industry; every individual in this room will be impacted by this technology. And I think it will be a positive and profound technology that makes our lives better. I'm really proud that my company gets to play a part in that vision. Pardon the pun. Pascale Davies:
All right, exciting times. Maya, I want to come to you, from Humanising Autonomy. Your focus is on computer vision that interprets human behaviour for a variety of applications, from the home to mobility. So 20/20 vision: what does that actually mean? Maya Pindeus:
So, for me it means two different things. One is vision: you see what's happening in the space. The other part is that you actually understand what is going on; you're actually interpreting everything that is happening. At Humanising Autonomy we focus on understanding, predicting and interpreting human behaviour from any video camera or stream, to help machines, robots, essentially any device, make better decisions and take actions. That can mean human experience, user experience, ultimately safety in many cases, and success rates, and it can be applied in a variety of production devices: from consumer to fleet to logistics, but also entertainment, the home, mixed reality. So essentially, when we look at 20/20 vision, when we look at human-machine interactions: wherever people are, there is a very, very high likelihood that there's some computer system or software trying to create an experience for us. And we want to make sure that this experience makes sense and is the right thing for you as a user. So 20/20 vision, for me, is about whether those machines, those robots, are actually understanding and interpreting correctly what we as people are doing and wanting to do. Our intention is to create experiences that enhance us and enable us rather than disturbing us. Pascale Davies:
Okay. Ariyan, you have a very exciting company, GrayMatter Robotics. You're building smart robotic assistants that help humans do the tasks that we don't want to do, because they're a bit too tedious. So what stage are we at now? How clearly can robots see? Ariyan Kabir:
Yeah, at GrayMatter Robotics our complete focus is on creating smart robotic assistants so that we humans don't really need to do the tasks that we don't want to do. If you look at all the different things that we humans do with our hands, there are two broad classes of operations. One, you move things around: pick-and-place operations. The other is process applications, which are really high-skill: think about cleaning, or sanding, or polishing, or painting. Only in about 2% of manufacturing, automotive and electronics, have robots been used effectively; the remaining 98% depends on human labour, just because there are so many variations and variabilities. For us, robots need to have 20/20 vision to be able to understand the world, understand all the variations and variabilities, whether it's guitar manufacturing or fighter-jet manufacturing, and to enable the robots to make decisions on the fly, do the job, and also guarantee quality consistency. So it's the vision and the whole perception stack that needs to play a role in every single stage, creating a holistic solution: an autonomy that lets humans focus on higher-level tasks while the robots focus on the tedious and ergonomically challenging ones. Pascale Davies:
Okay, so let's start with: what stage are we at now? How clearly can robots see? And why do they need to have 20/20 vision? Owen Nicholson:
Maybe I'll jump in there to give us a bit of a framework to help answer that question, because 20/20 vision is not about whether a robot can focus at long distance. We break this down into four core building blocks of spatial intelligence. The first is the ability for a machine to understand its position in space. I don't mean where it is in the pecking order; I mean in the sense of: what are my coordinates? If I take three steps forward, or my wheels roll forward by a metre, have I actually travelled a metre? That's your first building block. The second is: what's the shape of the world, and how do I get from A to B without hitting things? That's the mapping element. The third is understanding what those objects are, which is a higher level of intelligence. Is it a person? Is it a floor? Is it another robot? And therefore you would modify your behaviour accordingly. And the fourth, which is very much what Maya and Humanising Autonomy are doing, is understanding intent. We know what the objects are, but what are they intending to do? Why are they in that room? What are the emotions going on in that room? If we get all four of those building blocks, then we have spatial intelligence. And to be honest, we're still quite a long way down at the first and second building blocks; they still haven't really been commercialised, if we're honest. Pascale Davies:
Maya, coming to you now. Obviously, we always have the fear of robots taking over, maybe taking our jobs, or that something will go wrong. So should we be scared? Should robots have 20/20 vision? Maya Pindeus:
I think we need to get away from this image we have of robots everywhere, taking over every part of our lives. I don't think that will happen; it would be a bit weird as well. But I do think 20/20 vision is incredibly important, because when I talk about a machine or a robot, it's very loosely defined for me. Your phone is of course a machine, the AR headset is a machine, your hairdryer is a machine, your self-checkout machine, anything around you, your Alexa, your doorbell: all of those are really robots, because they have some sort of vision, and some have some sort of interpretation as well. But I think the usability, the comfort and the trust for us users, consumers, people, will come from how well those robots understand what is going on, and how well they put it to use. None of us is willing to have a product, whether it's a consumer or B2B product, that is just alerting, over-alerting, recommending and so on; we'll probably just shut it down. But if it's something that actually adds value, then I think it will be really welcomed. And then obviously there's the whole ethical component of building something that is trusted. Just to quickly go back to your earlier question: I think it's very much area- and industry-specific where certain advancements are at scale. We are at understanding space, understanding what is going on, and it's being put to use already at industrial scale, in mobility, manufacturing and so on, but I think the wider world is just getting ready. Ariyan, how important is it actually to integrate and to understand people? Because obviously, everything is made for us; hopefully, that's what it should be. Pascale Davies:
Ariyan, I do want to come to you on industry, because obviously you're creating the robots for the factories, right? So what is this going to change for the kinds of robots that are going to help us in our working life? Ariyan Kabir:
Yeah, so essentially, let's take an example: say, a boat manufacturing plant. In a boat manufacturing process, think about coastal Florida: someone who's doing this job today by hand is sweating 10 to 12 hours every day of their life in more than 100 degrees Fahrenheit, working with heavy power tools day in, day out. Within two to five years they get carpal tunnel, shoulder and back injuries, sometimes respiratory problems and whatnot. It's a really challenging work environment, and no one really wants to do that. On the other hand, the baby boomers are retiring, and the younger generations prefer a better quality of life and choose something else. But if there's no one to do the job, the whole industry is going to collapse, and manufacturing is the backbone of the economy. So we need autonomous solutions to do those tasks. Now, when it comes to doing the task: number one, as Owen and Maya mentioned, you really need to perceive the world. Once you perceive the world, the robot has to programme itself; it has to make decisions. So the 20/20 vision comes in to enable the robot to make those decisions about how to do the job. Once it programmes itself and does the job, during the operation it also has to sense and figure out: is it doing the right job or the wrong job? Am I making the product with the right quality or not? So it's that whole complete feedback loop, and at the end of the day it's also about creating a digital twin or a digital trace to guarantee that, hey, if you're the proud boat owner, you know for a fact that your boat meets all these quality specs. A combination of all of that has to be present in a complete turnkey solution.
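The perceive, self-programme, execute, inspect cycle Ariyan describes, ending in a digital trace, can be sketched as a simple control loop. This is a minimal illustrative sketch, not GrayMatter Robotics' actual software; every name here (`DigitalTrace`, `finishing_loop`, the callback arguments) is an assumption made up for the example.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTrace:
    """A minimal 'digital trace': one pass/fail inspection result per
    pass, so the eventual owner can verify the part met spec."""
    results: list = field(default_factory=list)

def finishing_loop(scan, plan, execute, inspect, max_passes=5):
    """Perceive -> self-programme -> act -> inspect, repeated until the
    part meets the quality spec or we give up. Returns the trace."""
    trace = DigitalTrace()
    for _ in range(max_passes):
        surface = scan()           # perceive the part, variations included
        toolpath = plan(surface)   # the robot programmes itself on the fly
        execute(toolpath)          # run the sanding/polishing pass
        ok = inspect()             # right job or wrong job? in spec or not?
        trace.results.append(ok)
        if ok:
            break
    return trace
```

Wiring it up with a simulated inspector that only passes on the second attempt, the loop logs a failed pass, reworks the part, and stops as soon as the spec is met, leaving a trace of `[False, True]`.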
However, at the same time, the system needs to work with the human operators, because the human operators are the ones who are giving the high-level instructions and encoding the preferences: whether this should be a fine finish, a matte finish or a glossy mirror finish, right? So it's a team of humans, robots and AI that come together to essentially transform the future of factories. Pascale Davies:
Okay, that brings me to my next question. You're the ones building the technology. So what are the current challenges in giving robots 20/20 vision? Owen Nicholson:
I think one of the challenges is that robots and these machines are physical, so we have to deal with the complexities of the real world. Sensor data is not always perfect. There's no standardised robot, and nor should there be: there's no one standardised animal which is perfect in every single environment. But what you do notice is that most animals, in fact the vast majority, use vision as their core sensing modality. They might have a decent neural network running in the background to actually interpret that data, but the common sensor is the visual sensor, which runs on many of these animals. So one of the critical challenges we see at an industry level is that there is no standardised approach to interpreting visual data into spatial intelligence. What's happening is that a lot of companies are building their own proprietary, siloed solutions. They might work really well on that one specific product, but as a warehouse owner, or in your home, your spatially intelligent machines can't communicate with each other, and you don't get the best experience. So that's something we're really passionate about, and what drives us forward at SLAMcore: creating a unified spatial intelligence interface, not just for robots but for headsets as well. I'm going to use the M-word: the metaverse is really just a digital layer that overlays the real world, and robots need to create their own metaverse to be able to interact with it and move through it. So it's no surprise for me to say that the biggest challenge is the one my company is trying to solve: the fact that vision is not a universally accessible technology. And that's what we're looking to change. Pascale Davies:
Maya, what do you think the challenges are for you, also from an ethical standpoint rather than just tech? Maya Pindeus:
Beautifully put, because I fully agree. One of the big challenges is transferability across domains. If we're looking at a space, like this stage here, or your living room, or your workplace, or a fulfilment centre, or your street: there are a lot of vertical attempts, right? Robots and machines built for different industries and different environments, which is also, to some extent, the right thing to do. But for AI and machine vision to really work at scale, it requires a level of transferability, and you cannot do that with deep-learning end-to-end systems, because they're really rigid and very data-intensive; it's just not really feasible. And this leads me to ethics as well: it's important to create something that is small, that is modular, adaptable and interpretable, so you can understand the decision-making of a system. For instance, if we were looking at body gestures and behaviour, you can compute those and create almost a probabilistic assessment of what an interpretation would be, whether we're sitting on stage or working in a fulfilment centre, based on that modularity. It also builds trust and really holds ethics front and centre, because it's explainable. And I think that explainability, not just in how the tech is built but in how it is communicated to the public, will be crucial. Pascale Davies:
Great. So we've talked about the challenges. I want to flip the question around, Ariyan, and ask: what would happen if we didn't give robots 20/20 vision? What would we miss out on? Ariyan Kabir:
The whole future, essentially, right? I mean, think about all the different things that Owen and Maya talked about, and also what we're doing. These are essentially the ways we humans are going to interact with the world in the future. We need all these machines to enable us, elevate us and empower us to focus on higher-value tasks and on more of the creative aspects of life and the creative aspects of jobs. So if we don't create this technology, bring it to the world and essentially create generational change, we can't really elevate ourselves as a species. Owen Nicholson:
Couldn't agree more. I believe in this technology. We're talking about the ethics of it, and the good thing is, it's hard and it's moving slowly, so we can make sure we make the right decisions and put the right regulation in place to make sure this is used in the right way, and we're very keen to be part of that conversation. But if we don't allow this technology to advance, then think of the biggest challenges we face as a society: things like the climate crisis, things like sustainable agriculture, or if you ever want to build a home on another planet. I know I'm sounding crazy here, but we are going to need robots that work at scale, not just technically but commercially; we have to overlay the entire commercial, business-model, investor side onto all of this. These are the technologies which are going to allow us to create solar farms the size of a desert that are maintained and built by robots, not instead of humans but in collaboration with them. So yeah, I'd say we almost have a moral duty to make sure this technology advances, and it frustrates me so much how much time has been spent reinventing the wheel by so many companies. That's something I'm very passionate about. Pascale Davies:
Okay, sorry, Owen, I'm going to come to you again with my question about reinventing the wheel. Why is that happening? Owen Nicholson:
It's because sometimes we underestimate the challenges required to build a fully autonomous system, to be absolutely honest. My background is much more in project management, working with the academic community and transferring technology into the real world. We can see great demos at CES or even at robotics conferences, and people might say the problem is solved, robots are already here. The reality is that the ones that actually make it to scale, and don't just run out of cash or never actually have an impact in the real world, are very few; you can probably count them on one hand. So there is an underestimation of the complexity required to go from a demo, or even something that scales to 100 units, to 100,000 or a million units. That's something that's just starting to change: we're starting to see modularity appear across the industry. But one thing we need to accept as an industry is that we shouldn't all be trying to do the same thing. Let's build on top of each other's components, just like the automotive industry has its supply chains and its tier ones and tier twos. That's the direction of travel we need to go in. Pascale Davies:
So, Maya, coming back to you. How important is this going to be, and what happens if we don't give robots 20/20 vision? And which industries are going to be most advanced in this sector, do you think? Maya Pindeus:
I can only agree with my co-panellists here. And I have to say, I'm a big, big fan of breaking things down into smaller chunks; I also think that's the future. As I wanted to say earlier as well, let's really not think of robots as the robots we were taught about as children, because this is something that is really ingrained into everyday life. And also because, as you mentioned, companies try to build whole full stacks and compete against each other, and then they fail, because it's actually really hard; a much more modular, component-based approach works. I would argue that by taking a component-based approach, you already see a level of maybe not 20/20, but close-to-20/20 vision in certain applications. Take cars: you have assistive systems that are really, really good at assisting the person. They don't automate, but they have pretty good vision, and they have some behaviour understanding in there as well. Those are the things I find really amazing. So I do think it's impossible not to have it, or not to go for it. In terms of industries, I think industrials, manufacturing, anywhere you can really increase efficiency in operations and increase productivity, are huge. I also think that, broken down, the mobility industry is big: not full autonomy, but different levels of autonomy implemented into driver-assistance systems. And anything in entertainment and the metaverse is huge, because that's really where so much progress is happening, anything from augmented reality to gaming; you can really see the progress every day. It's super exciting. Pascale Davies:
Okay, these are exciting times. So I want to ask you a quick-fire question: how soon? Owen, when you answered this at the beginning, you said several years. Can you be a little bit more precise? When can we expect it to happen? Owen Nicholson:
How long is this going to take? Good question. Because, as you're saying, there are elements where 20/20 vision is already being deployed in the wild; it's just whether we're talking about level one, two, three or four along the way. We have great neural networks which can detect humans running on cars right now. So for full spatial intelligence, all four of those blocks, I could probably count the years on one hand, but each finger represents a decade; we're talking about that kind of timescale. But over the next five to ten years, we are going to see an explosion of technology and products which use elements of spatial understanding, maybe not full spatial understanding, just being able to get from A to B safely without crashing, and there are huge amounts of potential in this entire industry. So probably five years in the short term, 50 years in the long term. Pascale Davies:
All right. How long would you give it, Maya? Maya Pindeus:
I'm a fan of the incremental approach. So I would say, for different applications, it's one to three years; to get to full spatial intelligence, yeah, I agree with Owen. But maybe we shouldn't think that we will only be successful if we have a fully transferable, automated spatial intelligence system, because that's really, really difficult. If we're actually able to take bits and pieces and enhance the human experience with them, I think that's happening now, and that's really what excites me. Pascale Davies:
And your prediction, Ariyan? Ariyan Kabir:
I agree with Maya that we're going to see more and more application-specific solutions much, much sooner, and then gradually it'll transition into something generalisable. But application-specific solutions are fine for now, because there are so many applications where we really need help and can use immediate help. Pascale Davies:
All right. Well, sadly, we are out of time, but it looks like someday soon we will have robots with 20/20 vision, hopefully. Thanks so much to our esteemed panellists today. Thank you, Web Summit. Thank you. Thank you so much.