Autonomous Systems - 4
Aggregation of criteria
Transcription
00:01:480Alessandro Brighente: So
00:03:130Alessandro Brighente: let's
00:07:560Alessandro Brighente: resume from where we started. Right? So,
00:18:960Alessandro Brighente: okay. So last time we were talking about how we can detect spoofing attacks or removal attacks on LiDARs based on how they perceive objects, right? So the fact is that when a computer vision algorithm detects an object, it doesn't care whether it's near or far or where it's located; it just detects an object. And so we said that we use this free space detection and laser penetration detection
00:47:367Alessandro Brighente: to understand whether we're talking about a real object or whether it's spoofed. So one of the things that we were discussing, a side point of which, we said, okay, we'll look at the algorithm, right, is how to
00:59:590Alessandro Brighente: compute these spaces in here. Well, I decided not to go through the whole algorithm because it's a lot of stuff, but I can give you the reference to that. But basically, what happens in here? So why are we computing these spaces?
01:14:800Alessandro Brighente: So let's think about how the LiDAR works and how the computer vision algorithms work with the points collected by the LiDAR. So what you have with the LiDAR is a physical process.
01:30:970Alessandro Brighente: You're leveraging physical properties, right? You have a laser pulse that is going towards an object, and it can be reflected by the object, or it can pass through the object. So we had the example of the window: if the window is open, my laser pulse would just go through the window, and I wouldn't see anything from that. Or let's make a different example. So let's consider:
01:58:530Alessandro Brighente: okay, I'm looking at a car, for instance. The car has its own windows, right? So my laser pulse may go through the car and bounce on an object that is behind the car. That's how I see, for instance, occluded objects. Well, this is what happens from the point of view of the LiDAR. Great. So now we have all these points, we have our point cloud. We give that to the computer vision algorithm, and the computer vision algorithm computes its
02:26:730Alessandro Brighente: features and stuff, and it detects objects. Right? So whenever we detect an object through our computer vision algorithm, what we define is a bounding box. So we have this 3-dimensional box that contains the object that we detect. And so if we
02:44:900Alessandro Brighente: detect a car, this bounding box will be a 3-dimensional volume that contains the car, so it estimates also the length of the car itself. Good. So we have not only the square that contains the image, but we have the 3D version of that. So when we say we are behind the space of the bounding box, it means that we receive points from the LiDAR, let's say,
03:15:750Alessandro Brighente: behind the actual bounding box. So let
03:20:760Alessandro Brighente: me
03:22:890Alessandro Brighente: draw on that picture to visualize this thing. Right? So let's suppose that this is our beautiful car.
03:29:380Alessandro Brighente: Right here we have windows.
03:31:650Alessandro Brighente: and then here we have another car.
03:36:10Alessandro Brighente: Because what happens with our LiDAR in here, right? We fire our laser pulses, and we get the reflections here, here, here, and so on, and this one goes through,
03:47:950Alessandro Brighente: and we get the 1st reflection from the object in front of the car
03:51:870Alessandro Brighente: right? So this is what we get from the LiDAR. But then the computer vision algorithm will create a bounding box
03:57:710Alessandro Brighente: that contains this car, right? So in this sense, from the LiDAR's point of view, we have the points that sit in between the LiDAR and the bounding box,
04:07:980Alessandro Brighente: the points inside the bounding box, which define our object (they represent the features that would allow us to identify the car), and then we have these in here, right, that are behind the bounding box. And this is something that we can count. So if we know that we are detecting a car, we can associate the bounding box with it (it's something that is rather fixed), and then we can also count the points that are behind the bounding box: we see them farther away,
04:39:640Alessandro Brighente: farther
04:40:810Alessandro Brighente: from
04:44:320Alessandro Brighente: the final plane of this bounding box that we have in here. Right? So these are computations that we can do by reasoning on the points that we have and merging the results between the physics of the LiDAR and the information that we get from the computer vision algorithm.
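To make the point-counting idea concrete, here is a minimal sketch; this is not the paper's actual algorithm, and the function names, the axis-aligned box, and the distance-based front/behind test are illustrative assumptions:

```python
# Illustrative sketch: count LiDAR returns in front of, inside, and behind
# a detected 3D bounding box, measured along distance from the sensor.
# All names and the front/behind approximation are assumptions, not a
# specific published algorithm.

def classify_points(points, box_min, box_max, sensor=(0.0, 0.0, 0.0)):
    """Split points into 'front', 'inside', 'behind' with respect to an
    axis-aligned box, using distance from the sensor origin."""
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(p, sensor)) ** 0.5

    def inside(p):
        return all(lo <= c <= hi for c, lo, hi in zip(p, box_min, box_max))

    # Distance from the sensor to the nearest/farthest box corner
    # approximates the front plane and the final plane of the box.
    corners = [(x, y, z) for x in (box_min[0], box_max[0])
                         for y in (box_min[1], box_max[1])
                         for z in (box_min[2], box_max[2])]
    near, far = min(map(dist, corners)), max(map(dist, corners))

    counts = {"front": 0, "inside": 0, "behind": 0}
    for p in points:
        if inside(p):
            counts["inside"] += 1
        elif dist(p) < near:
            counts["front"] += 1
        elif dist(p) > far:
            # Returns behind the box: the pulse passed through the object
            # (e.g. through a window), or the object is not solid.
            counts["behind"] += 1
    return counts
```

If "behind" is large compared to "inside", the pulses are penetrating the supposed object, which is the laser-penetration signal for a spoofed detection.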
05:03:573Alessandro Brighente: Just as a reference: have you ever taken any class on computer vision or this kind of stuff? Do you know how these things work? No idea? Right. So,
05:12:660Alessandro Brighente: okay?
05:14:180Alessandro Brighente: And that's something I should take into account next year.
05:18:580Alessandro Brighente: So
05:23:290Alessandro Brighente: yeah.
05:24:910Alessandro Brighente: so what happens with the computer vision algorithm is something that we discuss because it's useful also for the
05:32:520Alessandro Brighente: for the next detection algorithm
05:36:260Alessandro Brighente: that we use. So
05:38:430Alessandro Brighente: it's okay.
05:41:400Alessandro Brighente: Oh, it was not even that. Good.
05:45:154Alessandro Brighente: Okay? So from the computer vision algorithm, what we see is not the car, of course, right? We see a series
05:53:420Alessandro Brighente: of points. These are the points that we get through the LiDAR, right? And these are all the points that represent our (even worse) drawing of the car.
06:04:640Alessandro Brighente: Right?
06:07:906Alessandro Brighente: Good. So we have all these points in here, right? And then we see other points. Maybe we see points in here. Maybe we see points in here.
06:14:910Alessandro Brighente: right? And so on. So forth.
06:17:820Alessandro Brighente: More importance. Thank you
06:19:690Alessandro Brighente: right? So why do we have more points in here when we have a smaller number of points in here? Well, because we are reflecting different kind of objects. Right? So if we have a card, we will have more reflection on this. If we are hitting a surface the lighter, there will be a higher density of points in there, because it's a compact surface right from from the whole surface we can get reflection. So we will see a huge number of points in here.
06:44:570Alessandro Brighente: then, what happens is that we also see reflection from from the ground, right? So if the lighter is pointing toward the ground, we will have a reflection from that one as well. But we will have a smaller number or reflection in here. We need to be
06:57:470Alessandro Brighente: somehow luckier if you want to get the reflections from the ground. The lighter is not supposed to get this kind of information right. So every now and then it gets those points, and then you have some random points in here. Right there might be error, or might be detecting smaller objects, or it might be detecting another object.
07:17:120Alessandro Brighente: So what the computer vision algorithm usually does is: okay, I have all of these points; how do I differentiate between different objects? Right? So the 1st thing I could do is: okay, I know that if there's a surface, I will have a higher density of points,
07:35:300Alessandro Brighente: right? So all of the surfaces that I hit that are actual objects that exist in the 3D world will be characterized by a higher density of points. Good. So this is basic information. How does that translate into something that we can compute?
07:51:572Alessandro Brighente: Well, the 1st thing that we can do is to cluster these points. When we create a cluster, it means that we create a group of points based on different
08:03:90Alessandro Brighente: policies, if you want. So what defines a cluster to me? Well, I see that there is a high number of points in here. So let's say that I decide that this is my central point; which are the points that are
08:18:860Alessandro Brighente: closer than a certain distance to the central point? All the points that are very close to this one might belong to the 1st cluster, and so I will have this cluster in here. And then these points are, let's say, far away from the central point, so I will create another cluster in here, and the same holds for this one: I will create another cluster in here.
08:40:409Alessandro Brighente: Good. So now these are my different clusters. Right? They potentially represent the different objects.
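The clustering step just described can be sketched roughly like this; it is a single-linkage, distance-threshold grouping, where `eps` and the seeding rule are illustrative choices (real pipelines typically use k-d-tree-accelerated Euclidean clustering or DBSCAN):

```python
# Hedged sketch of distance-based clustering: pick an unassigned point
# as a seed and grow a cluster from all points within `eps` of any point
# already in the cluster. Parameter names are illustrative.

def euclidean_clusters(points, eps=1.0):
    def close(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) <= eps ** 2

    unassigned = list(range(len(points)))
    clusters = []
    while unassigned:
        seed = unassigned.pop(0)
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            # Points close to any cluster member join the cluster.
            neighbours = [j for j in unassigned if close(points[i], points[j])]
            for j in neighbours:
                unassigned.remove(j)
            cluster.extend(neighbours)
            frontier.extend(neighbours)
        clusters.append([points[i] for i in cluster])
    return clusters
```

Each returned cluster is then a candidate object whose point density and shape can be examined further.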
08:47:730Alessandro Brighente: But how do I get from these clusters to the actual definition of
08:54:340Alessandro Brighente: what they represent? Right? What are these objects in the end?
08:59:225Alessandro Brighente: Good. So in here we have many different algorithms that we can use
09:04:580Alessandro Brighente: if we want to detect something.
09:14:90Alessandro Brighente: Okay, let's say that we want to detect features. Maybe the features I'm interested in might be based on the light that the object reflects, right? But then I would need to have access to
09:29:270Alessandro Brighente: the image itself, and this is something that we don't have in here, so we cannot use classical feature detection. We should give these points to something that has learned how different objects reflect these laser pulses from the LiDAR. So that's where we have the different
09:50:630Alessandro Brighente: deep learning, or whatever feature extraction algorithms, we might use in here. Right? So you know that when you have some features, you can classify these features and create a mapping between the features and the objects they represent.
10:05:820Alessandro Brighente: When talking about
10:09:110Alessandro Brighente: cars again: what we have in here is a higher density when we hit a part of the car, or of the vehicle in general. So if there's another vehicle on the road that is located in here, again we will have a very high density of points in here that will represent the vehicle itself, so we will get these reflections in here as well. That's very important, because it allows us to identify
10:38:180Alessandro Brighente: how different regions of space are or are not occupied by objects. Right? If I see a high density of points in a certain region, then I,
10:48:500Alessandro Brighente: with a certain confidence, can assume that there's an object in there. Great. So now we have a way of detecting where objects are. So these bounding boxes, then, are a result of the
11:03:320Alessandro Brighente: deep learning computer vision algorithm that tells us: okay, this object has a distribution that resembles the one of a car; a car usually has this shape, and we can confidently confine the car within this 3D
11:15:640Alessandro Brighente: bounding box.
11:17:660Student: Do we need to know
11:19:760Student: which kind of object is in front of us? Because if we have an object there that we don't want to hit,
11:28:330Student: of course, in my opinion,
11:30:630Student: for now I think it's just enough that we
11:34:860Student: can
11:35:800Student: sense that we have an object in front of us.
11:39:420Student: It doesn't matter if that object is a truck, or a car, or
11:44:380Student: whatever. We just want to keep a distance from it.
11:48:20Student: Why do we need to know if it's, like, really a car, or...
11:52:590Alessandro Brighente: Because what you're talking about there is a 2-dimensional representation of an object,
11:57:870Alessandro Brighente: which, kind of,
11:59:820Alessandro Brighente: yeah, makes the attacker's life easier, right? Because in that sense you can just put some points in there,
12:07:810Alessandro Brighente: and then you see that there's a two-dimensional object. But all the things that we said up to now, about realizing that an object has been spoofed or removed by the attacker, don't work anymore, because you don't have this depth vision of: okay, someone is spoofing points; do they make sense with respect to the physics? If you have a 2-dimensional representation of objects, you just see a bunch of points, a high density of points. And for you
12:35:540Alessandro Brighente: it might be whatever: you know that the points are there, but you cannot tell whether it's something that the attacker created or something that is actually an object. And if it is actually an object and you're braking, fine,
12:49:900Alessandro Brighente: it's good that you did that. If it's not an object, and you're braking, and this makes the vehicle behind you crash into your vehicle,
12:57:670Alessandro Brighente: then that's bad. Right?
12:59:350Alessandro Brighente: So in order to have this
13:02:400Alessandro Brighente: kind of
13:07:330Alessandro Brighente: more realistic representation of how the points reflected by the LiDAR should behave
13:13:537Alessandro Brighente: according to physics, it's useful that we have this 3-dimensional representation of them. The other thing, the thing you were mentioning, is: okay, why do I care about recognizing what kind of object is in front of me? Right?
13:30:319Alessandro Brighente: Good. So if you are encountering an object on the road, this object might be
13:38:640Alessandro Brighente: a paper sheet, right? And do you want to brake because there's a paper sheet on the road? No, you don't want to do that. I don't care if I pass over a paper sheet. If instead I pass over a person, that's bad: I don't want to do that. I need to be aware of what is happening on the road, because that allows me to take better decisions. Right? If you're the driver,
13:59:180Alessandro Brighente: you would like to see what you're doing on the road. The same thing applies for vehicles, because they need to take decisions. What kind of decision should I make?
14:09:540Alessandro Brighente: Well, it depends on what you're encountering on the road.
14:12:330Alessandro Brighente: And besides providing these benefits in terms of decision making, when you have these bounding boxes you have, for instance, information on the typical size of certain objects, and that's useful for the physics of what you're looking at.
14:34:210Alessandro Brighente: And then, let's say that usually this computer vision stuff has been mostly developed for robotic applications. So imagine you are in an industrial factory automation
14:48:350Alessandro Brighente: scenario, and you want to be able to recognize the different kinds of products, the different kinds of machines that might be moving. So you want to be aware of the scenario that you're looking at. So,
14:59:281Alessandro Brighente: let's say, the capability of recognizing what the object actually is becomes useful for cars as well, for decision making. But it's not the native development scenario; it's something that
15:19:690Alessandro Brighente: came later, if you want. The application of this kind of algorithms to cars is something that happened
15:25:20Alessandro Brighente: way later than when they were initially developed.
15:40:10Alessandro Brighente: okay?
15:41:700Alessandro Brighente: And then.
15:44:350Alessandro Brighente: well, what we can now realize is:
15:48:900Alessandro Brighente: if that is how the computer vision, or
15:54:848Alessandro Brighente: the perception that cars have based on their LiDAR points, works, and how we can recognize objects, well, you can understand that if we have the points of view of multiple cars driving on the road,
16:08:580Alessandro Brighente: then we can create
16:13:720Alessandro Brighente: better maps somehow. Right? We can have a better representation of what is happening, of the possible objects, of the way the road is shaped, or,
16:25:660Alessandro Brighente: to put it another way,
16:27:40Alessandro Brighente: of how far I can go toward a destination: I know how much distance I
16:30:880Alessandro Brighente: can travel in a certain direction. So what they proposed in the literature is a system like this: you have your multiple cars,
16:46:930Alessandro Brighente: and all of these cars have their LiDARs, right? And so what they do is to
16:53:640Alessandro Brighente: collect these maps. And then this guy is computing its own map; let's call it map 1.
17:00:900Alessandro Brighente: And thanks to vehicle-to-vehicle communications, it transmits it
17:07:170Alessandro Brighente: to the second car. So what the second car will be doing is somehow
17:11:970Alessandro Brighente: to merge
17:13:430Alessandro Brighente: the two maps, to have a better representation of these points that they
17:20:619Alessandro Brighente: are seeing. So, something like: if I'm detecting an object, I will have an even higher point density on its surfaces; if there is no object, for instance, I can realize from multiple points of view that the number of reflected points is very small, and therefore it's very likely that there's no object in there. And I can base my decision making on the presence or absence of objects.
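The map-merging step can be sketched as follows; this is 2D with a made-up pose format, whereas real systems work in 3D with full rotation matrices and need time synchronization between vehicles:

```python
# Hedged sketch of map merging: car 2 receives car 1's point cloud
# together with car 1's pose (position + heading) expressed in car 2's
# reference frame, transforms the points into its own frame, and
# concatenates the two clouds. All names are illustrative.

import math

def merge_clouds(cloud_local, cloud_remote, remote_pose):
    """remote_pose = (tx, ty, theta): pose of the remote car in the
    local car's frame. Returns the merged cloud in the local frame."""
    tx, ty, theta = remote_pose
    c, s = math.cos(theta), math.sin(theta)
    # Rotate each remote point by theta, then translate by (tx, ty).
    transformed = [(c * x - s * y + tx, s * x + c * y + ty)
                   for x, y in cloud_remote]
    return cloud_local + transformed
```

The merged cloud then has higher point density on real surfaces seen by both cars, which is the property the fused perception exploits.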
17:48:870Alessandro Brighente: So when we're talking about this, we are talking about cooperative autonomous vehicles in the real sense, right? They are sharing information that they can actually use.
18:01:310Alessandro Brighente: What kind of information do they share? What is this map that they're sharing? Well, they can do that according to 3 different methodologies in here.
18:18:776Alessandro Brighente: And we have these 3 different fusion strategies in here: we are talking about early fusion, intermediate fusion, and late fusion. So the main difference between these approaches is: what kind of information do we share
18:35:850Alessandro Brighente: about the scenario that we're looking at? So let's stick with the example of the LiDAR. When we have our LiDAR, as we said before, we have the point cloud, then we have the features, and then we have the final decision on the objects that we encountered
18:52:30Alessandro Brighente: on the road. Right? So this is exactly what happens with early fusion, intermediate fusion and late fusion. With early fusion, I just exchange raw sensor data: I can exchange the point cloud that I collected through my LiDAR,
19:09:390Alessandro Brighente: and not provide any of my decisions to the other vehicle. I'm just providing you the data that I collected.
19:17:130Alessandro Brighente: With intermediate fusion, instead, I take one step more in my decision making before sharing
19:24:570Alessandro Brighente: my information with the other vehicle. So, instead of transmitting or sharing the point cloud, I transmit the feature map, the intermediate product between the LiDAR's raw
19:38:100Alessandro Brighente: sensor data and the final decision that I take on the objects on the road: I just share the features. Right? Why would I go from this model to that one, from early-fusion sharing to intermediate-fusion sharing? Well, if I need to share the whole point cloud, I have a lot of data: I potentially need to send a lot of packets
20:03:832Alessandro Brighente: to the other vehicles, because I need to convey the whole point cloud; I cannot decide to remove some of these points. If, instead, I share with intermediate fusion, what I share is just a smaller amount of data: I'm condensing, if you want, the point cloud into features that can then be used by a decision-making algorithm.
20:27:858Alessandro Brighente: And this provides a trade-off. So we are saving in terms of bandwidth: we need to share less information with the other vehicle. But at the same time I'm decreasing a bit the perception accuracy, because somehow I'm losing information: I'm not conveying the whole point cloud; I am sharing the result of the 1st processing of this data. So,
20:56:310Alessandro Brighente: in this sense, I'm losing part of the information.
21:00:470Alessandro Brighente: And then, instead, I have the late-fusion sharing strategy,
21:06:71Alessandro Brighente: where I just share the objects' bounding boxes. I took my final decision; I said, okay, to me there's a dog in there, and then I share the bounding box of the dog. So in this case, in terms of network traffic, I am sharing a very small amount of information, which is a bounding box
21:29:816Alessandro Brighente: and no
21:31:280Alessandro Brighente: other information than that.
21:33:360Alessandro Brighente: But you realize that I'm sharing my decision, which is not necessarily accurate enough for the other car; or at least we might not get the whole benefit of merging two different point clouds. Right?
21:47:590Alessandro Brighente: Yeah. It depends on which performance metric is more interesting for you.
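The three sharing strategies can be contrasted in a schematic sketch; every function here is a stand-in for the real perception stack, and the detection rule is deliberately a toy assumption:

```python
# Schematic sketch of early / intermediate / late fusion. The feature
# extractor and detector below are toy stand-ins, not real networks.

def extract_features(cloud):      # stand-in for a feature network
    return {"n_points": len(cloud)}

def detect(features):             # stand-in for a detection head
    return ["object"] if features["n_points"] > 100 else []

def early_fusion(my_cloud, their_cloud):
    # Share raw point clouds: maximum information, maximum bandwidth.
    merged = my_cloud + their_cloud
    return detect(extract_features(merged))

def intermediate_fusion(my_cloud, their_features):
    # Share feature maps: less bandwidth, some information lost.
    mine = extract_features(my_cloud)
    fused = {"n_points": mine["n_points"] + their_features["n_points"]}
    return detect(fused)

def late_fusion(my_cloud, their_detections):
    # Share final detections only: minimal bandwidth, decisions fixed.
    mine = detect(extract_features(my_cloud))
    return sorted(set(mine + their_detections))
```

Note how the fusion point moves later in the pipeline at each step, which is exactly the bandwidth-versus-accuracy trade-off described above.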
21:56:650Alessandro Brighente: Okay, so here you have a depiction of how these different fusion models work. You have early, intermediate, and late fusion. Basically, with early fusion, we collect the data from the LiDAR and we send it to the receiver, and then the receiver will have its fusion module, which basically needs to take these different point clouds and have a suitable algorithm to put them together, to merge them
22:26:380Alessandro Brighente: according, for instance, to the reference systems of the different cars and to the motion model of the car; and they need to be synchronized, to be sure that they represent the same scenario.
22:38:483Alessandro Brighente: So, great: we have the fusion module that creates a unique point cloud, and we give it to the feature extraction network, and then object detection and so on. So the second part of the pipeline, from feature extraction on, is exactly the one that we had before with the single vehicle. The difference is that
22:57:30Alessandro Brighente: the point cloud that we now have
23:01:60Alessandro Brighente: is the result of the merging of different point clouds from different vehicles.
23:05:327Alessandro Brighente: With intermediate fusion, the 1st part of the pipeline is the same that we've seen before, without the cooperative scenario. So we extract the features from the network, and the features are what we share with the other vehicles.
23:20:170Alessandro Brighente: So the receiver will have this fusion module, which now
23:23:620Alessandro Brighente: needs to be different from the previous one: it needs to be able to merge features instead of point clouds;
23:30:900Alessandro Brighente: and then we have our features, and the second part of the pipeline is what we had before, the detection and the results of the detection. With late fusion, we run everything up to the detection in here, and we share our detections. So the fusion module now needs to be able to merge the information about the different decisions from different cars
23:52:670Alessandro Brighente: and take a final decision based on that. So, for instance, it will compute the result of adding together the different bounding boxes.
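One way a late-fusion module might combine bounding boxes from two cars is an IoU-based merge, sketched below; the 0.5 threshold and the corner-averaging rule are illustrative choices, not a standard:

```python
# Hedged sketch: boxes from two cars that overlap enough (IoU above a
# threshold) are treated as the same object and averaged; unmatched
# boxes from either car are kept. 2D boxes for brevity.

def iou(a, b):
    """Axis-aligned boxes as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def merge_detections(mine, theirs, thresh=0.5):
    merged, used = [], set()
    for a in mine:
        match = next((b for b in theirs if b not in used
                      and iou(a, b) >= thresh), None)
        if match:
            used.add(match)
            # Average matched boxes corner by corner.
            merged.append(tuple((p + q) / 2 for p, q in zip(a, match)))
        else:
            merged.append(a)
    merged.extend(b for b in theirs if b not in used)
    return merged
```

Notice that an unmatched box from the other car is simply kept: this is precisely what a data fabrication attack exploits, as discussed next.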
24:08:800Alessandro Brighente: Okay? So you see that we have
24:12:354Alessandro Brighente: a fundamental security problem here. Right? As we said before, we have many different standards that enable secure communication between vehicles: we have vehicle-to-everything communication that
24:27:60Alessandro Brighente: might be enabled with authentication,
24:32:140Alessandro Brighente: integrity checks, access control mechanisms. We have secure solutions to ensure that only legitimate parties can communicate. But then the problem is: how can we ensure that the data that we receive from the other parties is actually legitimate?
24:46:580Alessandro Brighente: So what we are talking about now is data fabrication attacks. Right. So what do we mean by data fabrication attack?
24:54:760Alessandro Brighente: Well, more or less, the objective is the same that we had before, when we had an attacker that was spoofing points. But now we make the attacker's life even easier, because the attacker can simply generate these fake objects, or remove some objects, and convey this information directly through these cooperative scenarios.
25:18:760Alessandro Brighente: Right? So we have this map 1 in here that has been created by the attacker, for instance, and the attacker includes in this map an object that does not exist. Now this vehicle receives the map and detects the presence of the object, whether it's in the point cloud or in the bounding boxes or whatever: the other car will see that there's an object in there.
25:42:590Alessandro Brighente: And how can this car know whether it can trust the data that it received from the other car? How can it verify the actual presence or absence
25:58:280Alessandro Brighente: all these
26:01:70Alessandro Brighente: these objects.
26:04:400Alessandro Brighente: And then the other problem is.
26:07:270Alessandro Brighente: well, if we wanted to use something like what we had in Carlo, right? The physics of the objects.
26:14:480Alessandro Brighente: Well, at this point.
26:16:710Alessandro Brighente: not even they're worried about that right? The the adversary is perfectly able to to create these point clouds optimizing them to adhere to the physics rule of the lighters. Right? They can create that of mass.
26:31:930Alessandro Brighente: And furthermore, they can also collude with one another to create these fake office. Right? So here we're looking at 2 cars. But let's say that we have a network with many different cars. Right? So if we say that these 2 are attackers, and this is the victim. And this is the legitimate
26:51:690Alessandro Brighente: user, let's say all of them are sharing their map.
26:55:830Alessandro Brighente: But then these 2 vehicles can include one another right? They can agree on an object they wish to fulfill and include that accordingly, in their in their maps and share them through this vehicle. Now the this vehicle will see that these maps in here are coherent with one another.
27:15:670Alessandro Brighente: and that makes the the victim's life even harder. Right? How can we decide that these is this is fake, that this is an object that does not
27:24:200Alessandro Brighente: exist?
27:25:860Alessandro Brighente: Right? So we would like to define how these attacks might work right. How can we create these fake maps? And then how to defend against these
27:37:510Alessandro Brighente: this kind of attacks.
27:41:480Alessandro Brighente: How likely is this to happen in real scenario or in practice? Because for now it seems like purely theoretical, like just
27:50:570Alessandro Brighente: it can happen. But like. I don't see
27:53:590Alessandro Brighente: the security of trainings.
27:55:880Alessandro Brighente: because I cannot imagine that I'm driving a car and-, and my car is autonomous and taking information from the others, and just from somehow something disappears like magic that we did.
28:08:910Alessandro Brighente: It's just
28:11:320Alessandro Brighente: yeah. The problem with this is that we we are not living in a in a place where we have this autonomous car. Right? So ideally, these guys are.
28:24:00Alessandro Brighente: we're talking about fully autonomous car. Right? So you're not even driving. You are inside the car, and the car is taking its own decision right? So, despite the fact that you see or don't see an object when the car doesn't care about that, it's taking its own decision. Right? So if from the point of view of perception, the object disappeared.
28:42:420Alessandro Brighente: it disappeared.
28:43:630Alessandro Brighente: It's not the the magic like, I don't see the object anymore. It's something that the car doesn't see anymore. And that's actually a problem the other thing.
28:55:990Alessandro Brighente: and that the deals with the fact that we don't have these cars right.
29:01:700Alessandro Brighente: If we don't consider these kind of things now, when we will have actual cars driving in the even in this city.
29:09:260Alessandro Brighente: And then that would be kind of a problem. I mean, because at a certain point I can just share malicious data.
29:16:400Alessandro Brighente: and then cars will start to crash.
29:19:50Alessandro Brighente: And I would really not like
29:22:356Alessandro Brighente: that weapon.
29:23:916Alessandro Brighente: Yeah, somehow. From this point of view I I agree with you in the sense that
29:30:460Alessandro Brighente: it's very difficult to have the feeling of this right? Because we don't have this technology. It's not like I'm telling you about an attack that happens on the on the smart because we don't have autonomous cars. But.
29:42:470Alessandro Brighente: on the other hand, though
29:44:40Alessandro Brighente: why don't we have autonomous cars yet, because they're considered not secure.
29:49:210Alessandro Brighente: Right? So if we don't solve the both the safety and security problems of autonomous cars, and that's not something that we will
29:58:330Alessandro Brighente: see, or very likely see in even in small cities, or something like that.
30:06:560Alessandro Brighente: Now, this is not something that has been created a Doc, for this scenario. Right? It's not something like, okay, we want to come up with the with the security problem. And let's imagine that the the cars are actually doing this right, because it's convenient for us to to come up with these kind of attacks, because
30:25:430Alessandro Brighente: make out this is something that they have on actual test beds. Right? So what they say, okay, in this Geofence area, we are having autonomous cars. I saw something that happens, for instance, with biafolio in China or these other environments where they actually have fully autonomous cars.
30:44:590Alessandro Brighente: Right? This is something we are using, because, sharing this kind of information really improves the
30:52:30Alessandro Brighente: a decision-making capabilities of cars. You can really have very good estimation of of maps or
30:58:840Alessandro Brighente: Marxism when I'm talking about Marxism.
31:02:950Alessandro Brighente: Not like I just see what's on the road. Right?
31:06:600Alessandro Brighente: A very stupid example. The the vacuum cleaner, the autonomous vacuum cleaner which you might have at home right? The vacuum cleaner needs to cover the whole room, because if it doesn't cover the whole room, it's bad, right? I mean, it's still dirty, right? How does it do that.
31:23:225Alessandro Brighente: What it does is the 1st thing to to navigate the room. And thanks, it builds a map of the room right? So this has the navigation of the autonomous speaker right? It knows where to go in order to to cover the whole room right, because it knows the shape of the room.
31:43:310Alessandro Brighente: Great, exactly the same thing is what these autonomous cars need to do right. So if you're talking about the geofenced area, and we have an autonomous vehicle that needs to drive in there. Great. The autonomous vehicle needs to have a map of the Geo fenced area and something that it it collected right? And so having these autonomous thing is in sorry these
32:03:740Alessandro Brighente: collaborative, because in here helps to build a matter map that right, you cannot avoid losing features, because if you lose features, you will lose in accuracy of a navigation. And this is something that, that, they show these
32:20:120Alessandro Brighente: actually helping in increasing the performance of the speakers.
32:24:340Alessandro Brighente: And then, as a security practitioner, or security researcher, at this point, what happens is that I look: the robotics community tells us that this is what happens with their vehicles, and they show it works.
32:42:830Alessandro Brighente: How can we break them? No, no. The point is, how can we exploit
32:50:260Alessandro Brighente: these,
32:55:100Alessandro Brighente: yeah, these fusion models, basically, of the different data that we get
32:59:10Alessandro Brighente: from different vehicles. Are traditional security measures sufficient? A traditional security measure means: okay, I only talk with you because I trust you.
33:10:550Alessandro Brighente: And when I only talk to you because I trust you, it means that you are authenticated to me. But this does not solve the problem at all, because of the things that we just mentioned. And
33:23:860Alessandro Brighente: and this brings us to one other problem. Usually,
33:28:790Alessandro Brighente: we can have two big clusters, right? People that build technology and people that do security. They need to talk to one another, because otherwise these things might be a problem. Because if I'm someone dedicated to robotics and computer vision, I don't care about security. That's not my problem.
33:49:280Alessandro Brighente: I care about the algorithm being able to detect good features, to provide me a better and better map, because it serves my navigation purposes.
33:59:820Alessandro Brighente: But then I don't care how we share the information, as long as I have the information that is useful for me to build the map.
34:08:670Alessandro Brighente: and that's that. That's how we get to
34:11:760Alessandro Brighente: these designs in here.
34:14:570Alessandro Brighente: Does this answer your question?
34:20:340Alessandro Brighente: Repeat it again?
34:22:30Alessandro Brighente: I
34:23:600Alessandro Brighente: I feel your concern, right. It's not something that we are seeing every day or using as actual users.
34:43:940Alessandro Brighente: Okay? Good
34:46:550Alessandro Brighente: Now that
34:49:929Alessandro Brighente: we likely agree on the fact that this might be a problem. Good. So how can we formulate this attack? Right? What does it mean for us to create such an attack? Well, it means that we should be able to create a map here.
35:09:790Alessandro Brighente: Let's call it a map. Okay.
35:13:590Alessandro Brighente: in our case, depending on the fusion model, the map might be the point cloud, the set of features, or the bounding boxes. Right? We need to create this information and share it with the other cars, such that the
35:31:320Alessandro Brighente: anomaly detection algorithm that we've seen before, which is useful to detect whether an object has been spoofed from the lidar's point of view, is not working anymore. Right? So how do we generate these maps? How can we go from
35:49:970Alessandro Brighente: a legitimate map to something that contains a spoofed object, or something that is able to remove objects? Well, we can formulate an optimization problem. Right? So, just to give you
36:06:890Alessandro Brighente: a formal presentation:
36:09:516Alessandro Brighente: what we have is what you see here. Right? So let's say that at each time instant
36:18:340Alessandro Brighente: the lidar is collecting a point cloud, right? It's collecting a frame representation
36:25:140Alessandro Brighente: of its environment, and it's sharing that with the other vehicles.
36:31:650Alessandro Brighente: Let's say, here we have
36:37:160Alessandro Brighente: these ones in here. Right? So we have the attacker vehicle, and then we have a generic vehicle in here, and i is the time index of the
36:46:630Alessandro Brighente: lidar frame. So what happens is that all the lidar frames that belong to time i are merged with one another.
37:00:150Alessandro Brighente: Okay, so as we mentioned before, we might share the point cloud, we might share the features, or we might share the bounding boxes that
37:11:660Alessandro Brighente: we detect.
37:13:760Alessandro Brighente: Okay. And so we have different functions that denote the post-processed or the pre-processed data, right? So the point cloud, or some of the features that we have.
37:27:16Alessandro Brighente: So here you have the description of what happens with
37:34:110Alessandro Brighente: legitimate collaborative perception.
37:37:60Alessandro Brighente: Okay, so
37:40:440Alessandro Brighente: what happens in here is that all of these vehicles are sharing their map at time i, right? So they're sharing either their features or bounding boxes, whatever. And then we have this function g here that takes all these representations and creates a single representation out of them.
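The merge function g described here can be sketched in a few lines. This is only an illustrative early-fusion stand-in, not the lecture's actual system: poses are simplified to 2D (x, y, yaw), and the function names are assumptions; real systems use full 6-DoF poses and timestamps.

```python
import numpy as np

def to_world(points, pose):
    """Transform an Nx2 point cloud from a vehicle's local frame into the
    shared world frame. pose = (x, y, yaw) of the vehicle (a simplifying
    assumption: real systems use full 6-DoF poses)."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])          # 2D rotation for the yaw angle
    return points @ R.T + np.array([x, y])   # rotate, then translate

def fuse(clouds, poses):
    """Early-fusion g(...): merge the per-vehicle clouds collected at one
    time instant i into a single shared representation."""
    return np.vstack([to_world(pc, pose) for pc, pose in zip(clouds, poses)])
```

With intermediate or late fusion, the same merge idea applies to feature maps or bounding boxes instead of raw points.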
38:00:970Alessandro Brighente: So what the attacker aims at doing is not to share its feature map or its bounding box, but to share this feature map here plus a perturbation. Right? Let's call this the perturbation. So the point for the attacker now is, how do we...
38:24:800Alessandro Brighente: How does the attacker compute such a perturbation that we add in here,
38:30:601Alessandro Brighente: such that it causes a wrong detection at the victim's side, but at the same time respects a certain constraint set? Right? What is this constraint set? We said before that we want to generate something that makes sense according to the physics of the environment. So it means that I cannot create a weirdly shaped object that doesn't make any sense with respect to how the lidar works, or that does not respect the
39:00:730Alessandro Brighente: occlusion patterns that we've seen for lidars. Right? And it needs to make sense, because otherwise solutions from the state of the art already work in detecting that the object is fake.
39:13:580Alessandro Brighente: Okay, so great, how do we do that? So here you have a representation. So this term in here
39:22:480Alessandro Brighente: is the objective function of the optimization algorithm. Right? It's something like: I want the victim vehicle to detect the presence of an object, or I want the victim vehicle to not detect the presence of an object. And this term here is the variable that you can adjust, i.e. the perturbation that the attacker places on top of the
39:47:180Alessandro Brighente: shared frame. And this in here is the set of constraints that
39:52:20Alessandro Brighente: we just described about the shape of the object.
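The optimization just described, minimizing an adversarial objective over a perturbation delta restricted to a feasible set, can be sketched with projected gradient descent. Everything below is a toy stand-in, not the actual attack: the objective simply pulls the shared points toward a target location, and the constraint set is a plain element-wise bound rather than the physical-consistency constraints.

```python
import numpy as np

def pgd_perturbation(feature_map, target, eps=0.5, steps=50, lr=0.1):
    """Projected-gradient sketch of the attacker's optimization: find delta
    minimizing J(F + delta) subject to delta in Omega. Here Omega is the
    box {|delta| <= eps} (a toy stand-in for the physical-consistency
    constraint set), and J pulls the perturbed map toward `target`."""
    delta = np.zeros_like(feature_map)
    for _ in range(steps):
        # J(F + delta) = 0.5 * ||F + delta - target||^2, so the gradient
        # with respect to delta is (F + delta - target):
        grad = (feature_map + delta) - target
        delta -= lr * grad                 # gradient step on the objective
        delta = np.clip(delta, -eps, eps)  # project back onto Omega
    return delta
```

The real attack differs mainly in the objective (the victim's detector output) and in the much richer constraint set, but the iterate-then-project structure is the same.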
39:56:240Alessandro Brighente: Good. So which kind of attacker do we assume? How do we?
40:03:860Alessandro Brighente: What can an attacker do? So the first thing is, we need an attacker that is able to control at least one of the vehicles, right? It means that the attacker is somehow the vehicle itself, because if that's not the case, we cannot modify the feature maps at will. We would need to break the cryptography or break the integrity of the packets, and we cannot do that, right? Because we said that we already have solutions
40:33:237Alessandro Brighente: that help in detecting such kinds of signal issues. So, okay, we control the vehicle. It means that we can create whatever feature map we want, and we can directly manipulate the data that we want to share.
40:50:320Alessandro Brighente: But at the same time, besides the attacker vehicle, there are also some other vehicles that the attacker cannot directly control. Right? So,
41:00:500Alessandro Brighente: of course, if the attacker could control all of the vehicles besides the victim one...
41:05:990Alessandro Brighente: It would not be
41:08:788Alessandro Brighente: realistic, somehow. Right? How could an attacker control all of them? So the point is,
41:16:560Alessandro Brighente: the attacker doesn't need to do that. The attacker can change only its own feature map, send it to the other vehicles, and impact the perception that these vehicles actually have. Right? So, despite the fact that the attacker might control only a small number of vehicles, and there are still legitimate vehicles to be perceived,
41:42:170Alessandro Brighente: we will still have problems with the
41:46:190Alessandro Brighente: perception algorithm on the victim's side.
41:51:870Alessandro Brighente: Good. So what are the constraints when we define our perturbation? So, just one short notice:
42:00:460Alessandro Brighente: I think we will not have a break today, and we finish 10 minutes early,
42:05:810Alessandro Brighente: because I need to leave. So
42:08:680Alessandro Brighente: I think we can
42:10:750Alessandro Brighente: move on with this.
42:12:476Alessandro Brighente: So when we talk about the optimization problem, we said that we have a set of constraints that we need to respect when generating our perturbation. Right? So here are the constraints that the attacker has to satisfy to avoid being easily detected by one of the methods that we mentioned before.
42:35:557Alessandro Brighente: So the first one is related to the sensor physics and the data definitions,
42:41:720Alessandro Brighente: right? So we have certain rules in terms of data format. We cannot simply share whatever data we want; it needs to be something that complies with what the other vehicles expect, of course. And what does that mean? It means that, for the point clouds, for instance, if we're talking about an early fusion algorithm where we share point clouds, we cannot share whatever point cloud we generate. The point cloud
43:11:528Alessandro Brighente: needs to have a certain distribution; it needs to be reasonable. Otherwise, it's
43:19:60Alessandro Brighente: very easy to detect that,
43:22:840Alessandro Brighente: for instance, occlusion effects have not been respected, or that the reflection angles that we have with the lidar do not make any sense with respect to the lidar configuration. Right? So if I'm sending...
43:42:90Alessandro Brighente: let's say that all the vehicles have the lidar at the same height. Right? We know the reflection angles that we expect from these laser pulses, and how they should bounce back to the receiver. If this is not what we expect, it means that
43:57:480Alessandro Brighente: something is wrong in there.
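As a toy version of this physics check, assuming a flat ground plane and a known lidar mount height (both assumptions for illustration, not stated in the lecture), the distance at which a downward-tilted beam should hit the ground is fixed by geometry, so a shared ground point far from that distance is suspicious:

```python
import numpy as np

def expected_ground_range(lidar_height, elevation_deg):
    """For a beam pointing `elevation_deg` below horizontal from a lidar
    mounted `lidar_height` metres above flat ground, the horizontal
    distance to the ground hit is h / tan(elevation). A shared ground
    point far from this value hints at a fabricated point cloud."""
    return lidar_height / np.tan(np.radians(elevation_deg))
```

A receiving vehicle could run such checks per beam angle as a cheap plausibility filter before fusing a shared cloud.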
44:00:185Alessandro Brighente: And then the second constraint is targeted attacks, right? So the attacker needs to have
44:06:590Alessandro Brighente: a certain target region in which to create or remove objects. Right? So it's not like we can generate a non-existing vehicle wherever we want in space, or something that is
44:21:190Alessandro Brighente: somehow spread over the space. No, it needs to be localized in a certain region, because only if we can do that can we have control over the decision-making capabilities of the victim vehicle. Right? So, for instance, the spoofed vehicle should be placed in front of the victim car. So in that sense, we're talking about
44:41:880Alessandro Brighente: targeted attacks.
44:45:170Alessandro Brighente: And then the real-time temporal constraint is, if you want, the most difficult constraint.
44:53:420Alessandro Brighente: What happens in here? So, periodically, we collect our feature maps from the lidar, right? The point cloud, the features, the bounding boxes, whatever. We collect information from the lidar, and then we need to share it with the other vehicles. But if we want to have an impact on
45:13:330Alessandro Brighente: the victim vehicle's
45:15:60Alessandro Brighente: decision-making capabilities, then how can we generate a perturbation in real time and share it with the other vehicles in real time, such that we actually cause an impact? Done naively,
45:28:80Alessandro Brighente: that's not possible.
45:29:890Alessandro Brighente: Good. So what does it mean to respect this real-time constraint? Or how can we
45:37:640Alessandro Brighente: respect these real-time constraints? Well, the first thing is that we have our optimization problem that generates the fake point cloud, for instance. And this is something that needs to run quite fast, right, and for sure it needs to be finalized and shared with the victim vehicle before that vehicle actually generates its own frame and takes a decision
46:05:60Alessandro Brighente: based
46:07:260Alessandro Brighente: on it. Okay. And so, what is the effect of this? It means that I need to perform some prediction, right? I don't know the victim's frame at the time it will fuse the malicious one. So I need to perform some prediction, and I need to predict what frame the victim will look at at the time of the attack,
46:28:473Alessandro Brighente: such that I can craft a malicious shape that would actually make sense
46:33:480Alessandro Brighente: for the victim vehicle.
46:41:70Alessandro Brighente: Good. So how
46:43:430Alessandro Brighente: do I do that? Well, for instance, I can use some motion-tracking algorithm, right? I know that it's going to take a certain amount of time for me to generate my shape and share it with the other vehicles, so I can already place
47:00:950Alessandro Brighente: the spoofed object
47:03:800Alessandro Brighente: in the location that will make sense with the time passing, right? Or, the example works even better if I remove an object. If I want to remove an object, I need to remove the object where it will be when the frame is used.
47:28:640Alessandro Brighente: let's say that this is
47:30:870Alessandro Brighente: my frame at time t, right, and there's an object in here, and I want to remove the object.
47:38:640Alessandro Brighente: But if this frame is received at time t plus
47:45:460Alessandro Brighente: delta, whatever, by the victim vehicle, the object that was removed
47:51:950Alessandro Brighente: has moved over here. So despite the fact that I'm sharing something where the object has been removed,
47:57:780Alessandro Brighente: the object is still there. Actually, I'm not removing
48:00:920Alessandro Brighente: anything, and I would not match the
48:04:690Alessandro Brighente: temporal behavior of the object itself. So what the attacker needs to do at time t is to predict where the object will be at time t plus delta, say here, cancel it out from the frame, and then share it, such that the actually existing object is removed. Right?
48:27:820Alessandro Brighente: So I need to perform this prediction. How do I know this delta
48:34:800Alessandro Brighente: value here? Well, usually these operations that the lidar performs are periodic, right? We said that the lidar rotates with a certain periodicity, so it means that we get a point cloud every T seconds, whatever T is. And since this is periodic, I can predict how much time
48:55:580Alessandro Brighente: it will take for the successive rotation to happen.
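In numbers: assuming a typical 10 Hz spinning lidar (the rate is an assumption; the lecture only says the rotation is periodic), the attacker's whole sense-predict-craft-share pipeline must fit into one rotation period.

```python
def attack_time_budget(rotation_hz):
    """The lidar revisits the scene once per rotation, so the attacker's
    prediction horizon delta and compute budget are both bounded by one
    rotation period: T = 1 / f. At an assumed 10 Hz, that is 100 ms."""
    return 1.0 / rotation_hz
```

So at 10 Hz the perturbed frame must be generated and transmitted within roughly 100 ms, which is why the delta to predict over is exactly one collection period.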
49:04:130Alessandro Brighente: Good. So
49:07:480Alessandro Brighente: This is already part of what is called here zero-delay attack scheduling, right? So, to spoof objects or to remove objects, we need this
49:23:340Alessandro Brighente: trajectory computation: I try to understand what is moving,
49:28:70Alessandro Brighente: and how can I know the direction in which the object is going to move? Right? So at the attacker's side I will have
49:36:450Alessandro Brighente: a series of frames, all belonging to the same scenario, where I see that the object is moving, right?
49:45:00Alessandro Brighente: And I can predict that at the successive time frame it will keep moving in this direction, because that is what it has been doing
49:54:490Alessandro Brighente: in the previous frames. Right? So I can perform this kind of prediction. But that means that I need to parallelize two different processes here. At the same time, I should be able to sense the environment and to generate something that is going to remove or spoof an object in the final frame I will share
50:16:890Alessandro Brighente: with the victim vehicle. Right? So that's what it means to parallelize the generation and the perception processes in here.
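A minimal stand-in for the motion-tracking step described above is constant-velocity extrapolation from the last two observed positions; real trackers (e.g. Kalman filters) are more robust, but the idea is the same, and the function name and track format here are illustrative assumptions.

```python
import numpy as np

def predict_position(track, dt):
    """Constant-velocity prediction: given a track of (timestamp, position)
    observations from previous lidar frames, extrapolate where the object
    will be `dt` seconds ahead, so the spoofed or removed object is placed
    where the victim will actually see the scene."""
    (t0, p0), (t1, p1) = track[-2], track[-1]
    v = (np.asarray(p1, float) - np.asarray(p0, float)) / (t1 - t0)
    return np.asarray(p1, float) + v * dt
```

For object removal, the attacker would run this on the victim object's track and cancel the points at the predicted location, not at the currently observed one.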
50:29:130Alessandro Brighente: Good. So we need to be somehow aligned with what happens at the victim's side.
50:48:50Alessandro Brighente: Alignment means that the different frames that I sense,
50:53:290Alessandro Brighente: or that I collect with the lidar,
50:57:380Alessandro Brighente: agree with one another
50:59:310Alessandro Brighente: with respect to the location of the objects. Right? So it's not like I want all the frames to be received at exactly the same time instant by the victim vehicle. No, what I want is that the feature maps are synchronized with one another, so that the object sensed by one vehicle and the object sensed by the other vehicle
51:23:110Alessandro Brighente: are in the same location within a reasonable precision. Right? It should more or less be there. That's what we
51:31:166Alessandro Brighente: mean when we talk about synchronization between the different maps here.
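This notion of synchronization as spatial agreement, rather than identical timestamps, can be written as a simple check (the 0.5 m tolerance is an assumed number, not from the lecture):

```python
import numpy as np

def consistent_detection(pos_a, pos_b, tol=0.5):
    """Two vehicles' detections of the same object count as synchronized
    if they land within `tol` metres of each other in the shared world
    frame; identical receive times are not required."""
    gap = np.linalg.norm(np.asarray(pos_a, float) - np.asarray(pos_b, float))
    return float(gap) <= tol
```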
51:45:660Alessandro Brighente: Okay, so
51:48:470Alessandro Brighente: how much time do I have in order to perform this computation now? Right? So
51:54:294Alessandro Brighente: I know the current scenario. I know that if I share my frame at this time instant with the other vehicle, it's going to take a certain decision and will
52:07:310Alessandro Brighente: compute a certain alignment.
52:09:280Alessandro Brighente: But in order to predict how to spoof a vehicle or how to remove a vehicle.
52:14:980Alessandro Brighente: I don't have all the time in the world, right, because I need to make my prediction, and my prediction of the location of the object should be reasonable. If I predict the location of an object too far
52:30:170Alessandro Brighente: ahead in time, maybe I'm not going to guess it correctly. Right? So the most time that I have is a single lidar collection period. So I'm collecting my lidar frame now; I should now compute where the object is moving, such that I can create the fake object or remove the object in the next frame
52:51:690Alessandro Brighente: I'm going to share, such that I'm confident enough that I will not lose the alignment with the victim vehicle, and that I will generate a point cloud which is reasonable with respect to the movement of the objects. Right? So when we talk about the movement of the objects, of course, we're not talking only about the spoofed object, because,
53:18:90Alessandro Brighente: again, we are not sharing information only about the spoofed object. We are sharing information about all the objects that we see in our scenario. Right? So let's imagine that this is our scenario, and we know that there's a car in here, another car in here, and I want to spoof another car
53:38:453Alessandro Brighente: in here, whatever. These objects are still detected by all the other vehicles, and detected by me as the attacker. Right? And the receiving vehicle will check for consistency of the location of things as well. They need to be here. So if they move too much in this scenario, I can create a very good shape for the spoofed object, but then the receiving car will
54:06:90Alessandro Brighente: see that this frame is not aligned, and will just discard it, right?
54:10:470Alessandro Brighente: It depends on the lidar implementation, but no less than that. So,
54:22:40Alessandro Brighente: to understand if it's something
54:24:700Alessandro Brighente: reasonable to believe that...
54:31:670Alessandro Brighente: So,
54:34:240Alessandro Brighente: it really depends on the implementation of the lidar. Let's say half a second.
54:42:480Alessandro Brighente: It's a number depending on...
54:47:901Alessandro Brighente: so yeah, you need to be able to run your algorithm
54:52:390Alessandro Brighente: within that time. Yeah, yeah.
54:58:980Alessandro Brighente: Okay, yeah. No, you're talking about the tracking of the object. Yeah, yeah. It's not like you...
55:18:50Alessandro Brighente: So it's not like you generate...
55:20:760Alessandro Brighente: I'm gonna say,
55:25:310Alessandro Brighente: okay, it's not something like: I have never seen a car before, and I need to get to the shape of a car starting from random. So I start from some blob-like stuff, and then this blob-like stuff needs to be optimized until it looks like a car. That's not what happens. Right? I know which kind of object to spoof, because I have my own computer vision algorithm that works the same as the other cars'. So I will start from something that is reasonably...
55:55:220Alessandro Brighente: okay, I start with something that is reasonably
55:58:550Alessandro Brighente: a car, right? And then optimize the shape. And optimizing something that is already very close to the final shape of what should be there
56:05:940Alessandro Brighente: is definitely easier than starting from scratch.
56:21:930Alessandro Brighente: Okay. So now, the way in which we create these fake objects, or make objects disappear if you want, depends on which kind of fusion algorithm we're using. So in this one in here, we can talk about a black-box setting, and we consider an early fusion collaborative system. Right? So when we talk about early fusion, it means that we are sharing the point cloud with the other vehicles.
56:51:950Alessandro Brighente: Right? So I can perturb the point clouds directly; I can move the points around
56:57:600Alessandro Brighente: in a way that is convenient for me, to let the others believe that this object is actually there, or not there.
57:07:860Alessandro Brighente: Good. So how do we go
57:11:530Alessandro Brighente: from the idea to generating these points? Right? So we have two insights in here that we can use.
57:19:890Alessandro Brighente: And this is exactly what we were talking about before. So if
57:25:100Alessandro Brighente: there's an object, then we'll see a higher density of points
57:30:270Alessandro Brighente: on the surface of that object. Right?
57:35:310Alessandro Brighente: So what does it mean? It means, for instance, we saw in one of the algorithms that we presented during the last lecture that we might have boxes,
57:46:290Alessandro Brighente: thanks to which I can segment my space. Right? And if there's a higher density of points in a box that should represent an object, it's more likely that in this region I will detect the presence of an object. So that's the first thing that I should consider when generating my points.
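This first insight, higher point density inside a candidate box makes a detection there more likely, can be illustrated by simply counting points per box. This is a crude proxy for a detector's confidence, not the actual detection pipeline from the lecture:

```python
import numpy as np

def points_in_box(cloud, box_min, box_max):
    """Count points falling inside an axis-aligned box. The attacker's
    goal when spoofing is to push this count (the density inside the
    target box) high enough that the detector declares an object."""
    cloud = np.asarray(cloud, float)
    inside = np.all((cloud >= box_min) & (cloud <= box_max), axis=1)
    return int(inside.sum())
```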
58:06:820Alessandro Brighente: And then
58:09:350Alessandro Brighente: the other consideration is that a higher
58:14:40Alessandro Brighente: density of points
58:17:110Alessandro Brighente: can also be correlated with a larger coverage of the object surface. Right? So not only do I want to have a large number of points in here that should represent the car, but I want this large number of points to be distributed on a surface that
58:34:410Alessandro Brighente: is large enough, such that it's more representative of the object that is going to be detected. Right? So that's the other thing: not only should I generate a large number of points, but I should generate these points on a surface large enough, such that it's recognized as the object I want to spoof
58:55:910Alessandro Brighente: in there.
58:59:570Alessandro Brighente: Good. So given that we have these two insights, what changes when we want to spoof an object, or what happens when we want to remove an object? Well, when we want to spoof an object, of course, we want to create a denser cloud of points,
59:21:781Alessandro Brighente: such that, again, it covers an area large enough to be representative of the surface of the spoofed object. If instead we want to remove an object, it means that we need to be able to obscure a surface
59:39:550Alessandro Brighente: large enough, such that it's not representative of the object anymore. Right? So it means that I need to be able to delete all these points
59:47:80Alessandro Brighente: in here.
59:50:950Alessandro Brighente: Good. So how do we do this? Well, we have algorithms to basically check how the laser pulses would be reflected by the objects,
00:07:340Alessandro Brighente: in a room, for instance.
00:09:374Alessandro Brighente: And this is called ray casting in here. Right? So ray casting is something that helps the attacker ensure that the points generated are not based on non-existing shapes, but on something that actually exists. So, to give you an example
00:28:160Alessandro Brighente: of how ray casting works: you can imagine exactly the inner working principle of the lidar. Right? So you have something that is rotating around, sending laser pulses and measuring the reception of the reflections. Well, with ray casting we are doing something similar, but we are doing it in a simulated environment. So it's actually simulating the behavior of the lidar. So it's something like: we have our own...
00:56:780Alessandro Brighente: Let's make the example of a room with these objects in here. Right? So we will have our device in here; it will send
01:05:480Alessandro Brighente: rays all around, and see the responses that it gets from the scene, to create a reasonable map. Right? So it's basically a simulator of the lidar,
01:18:650Alessandro Brighente: in our case.
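A tiny 2D version of this simulated lidar can make the idea concrete. A single circular obstacle stands in for the 3D model of the spoofed object; the function name and geometry are illustrative assumptions, and a real attacker would cast rays against a full triangle mesh.

```python
import numpy as np

def raycast_circle(origin, angles, center, radius):
    """Minimal 2D ray casting: simulate lidar rays from `origin` at the
    given angles against one circular obstacle, returning the first hit
    point per ray (None when the ray misses the object)."""
    hits = []
    o = np.asarray(origin, float)
    c = np.asarray(center, float)
    for a in angles:
        d = np.array([np.cos(a), np.sin(a)])         # unit ray direction
        oc = o - c
        b = np.dot(oc, d)
        disc = b * b - (np.dot(oc, oc) - radius ** 2)  # quadratic discriminant
        if disc < 0:
            hits.append(None)                          # ray misses the circle
            continue
        t = -b - np.sqrt(disc)                         # nearest intersection
        hits.append(o + t * d if t > 0 else None)
    return hits
```

Sweeping the angles over a full revolution reproduces exactly the rotating-pulse behavior described above, just in simulation.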
01:21:420Alessandro Brighente: Good. So
01:23:590Alessandro Brighente: how do we create the shape of the object? Right? We should create a 3D model of the object that we want to spoof, for instance, such that we can encode it in the point cloud.
01:36:800Alessandro Brighente: Good. So we know how these objects should be represented from the computer vision algorithm's point of view, right? The thing that we said before: if we want to spoof a car, we know the shape that the car should have, or at least how the points of the point cloud are distributed on the surface of an ideal car.
01:57:780Alessandro Brighente: Alright, so that is what we do when
02:03:330Alessandro Brighente: when we
02:04:990Alessandro Brighente: want to spoof an object. If instead we want to remove an object, what the attacker might be doing is creating what
02:14:710Alessandro Brighente: is called a universal adversarial shape.
02:18:540Alessandro Brighente: So what is a universal adversarial shape? It is a shape that is going to be misclassified by the computer vision algorithm. Right? So here we are exploiting precisely the way in which the
02:33:360Alessandro Brighente: computer vision algorithm takes decisions. We know that if we give it a shape with certain properties, for sure it's not going to be able to classify it correctly, and it will not be able to detect the presence of an object. This belongs mostly to how adversarial machine learning works.
02:56:160Alessandro Brighente: We're not going to cover that here, but if you want,
02:59:140Alessandro Brighente: I can give you some references for that.
03:02:280Alessandro Brighente: Okay, so
03:06:970Alessandro Brighente: trust me that these universal shapes are not going to be classified correctly by the computer vision algorithm, and that's exactly what the attacker exploits. Right? So I know that, as the attacker,
03:19:30Alessandro Brighente: I cannot physically remove points, but I can add points to a certain region
03:25:910Alessandro Brighente: to lead to misclassification of these points.
03:33:980Alessandro Brighente: Good. So now, we said that we have targeted attacks in terms of
03:40:40Alessandro Brighente: location in space. So it means that when I create my 3D object, my 3D shape, I will put it exactly in a region of the space that makes sense with respect to the attack. Right? So, going back to the prediction that we have in here: we know the shape of the object, great; we know that this object is going to be here at the successive time frame. So I will just put
04:00:120Alessandro Brighente: that shape in there, and now run my ray casting algorithm. Right? So what does a ray casting algorithm do? Well, basically, it computes the intersection points between your lidar rays and the object that is supposed to be there.
04:17:260Alessandro Brighente: Right? Again, we're sharing the point cloud, so we don't send the shape of the object; we need to report to the other vehicles where there's an intersection between my laser pulses and a physical object. Right? So that's why we need this simulation in here: in order to understand the location of these intersections between the laser pulses and the object.
04:41:580Alessandro Brighente: Good. So what the ray casting does here is to ignore the occlusion effects, with the aim of maximizing
04:51:960Alessandro Brighente: the point density on the target object. Right? So it means that faces are not blocking,
04:58:370Alessandro Brighente: and rays can directly go through the model's faces,
05:02:340Alessandro Brighente: and this leads to multiple intersection points. Right? So it's something like: I have an intersection point in here, one here, one here, and so on, a lot of them.
05:10:540Alessandro Brighente: Okay? Because, again, we are not dealing with the real object, but with a simulation. So we need to get rid of these extra intersection points, because otherwise this is going to be detected as an anomaly by the receiving vehicle.
05:26:440Alessandro Brighente: Good. So what we do is what is called point sampling. Right? We want to get rid of this occlusion problem, and we need to approximate all of these points that are very close to each other with just a single point; otherwise our point cloud will not make sense. And usually what we do is that, if we are performing our ray casting from this point in here where we assume our lidar is,
05:51:130Alessandro Brighente: we approximate all of these points with the point that is closest to the location of the lidar, as that
05:58:920Alessandro Brighente: very likely resembles the hit on a single surface. Right? If we do that for all these points in here, we will end up with only the points that would actually be on the surface of the object, and
06:16:90Alessandro Brighente: this mimics the real-life distribution of the point clouds that we would have on a real object.
06:29:120Alessandro Brighente: Good. So now, the other problem that we need to solve is related with the with the ray angles. Right? Because we we need to provide some information on the the actual physical, how we transmitted the ray and how it bounced back.
06:47:60Alessandro Brighente: Okay, so we need to to remove all the points with the same with the same angle, right? Because otherwise it would mean that through this angle I receive the multiple points. But that doesn't make sense with the with the physics of how the ladder work right. If I'm hitting a surface in here. Then it's not possible that I'm hitting a surface in here as well. The radi cannot go
07:09:500Alessandro Brighente: or pass through the surfaces right. So I need to remove all of the points that the mice have generated with the same in the same angle in here, right? And this guarantees that I.
07:20:110Alessandro Brighente: I create a point cloud that makes sense with the, with the physics
07:25:300Alessandro Brighente: of the of the of the ladder itself.
07:31:900Alessandro Brighente: Okay. So another point would be. How do I use the different maps that I get from the different vehicles to understand whether such an object has been spoofed or removed? Right? So I have something that it's not possible to detect. Thanks to the cargo algorithm that we said last time.
07:52:530Alessandro Brighente: right? Because you see that, thanks to these things that I mentioned before. Right? So the point sampling and removing these this reflection with the same angle, I'm not able to to compute the a good ratio or a known valid ratio for the 3 States, and I'm not able to leverage the laser pulse detection
08:15:10Alessandro Brighente: to see whether there could be an object makes sense or not right? Exactly exactly what we did in here to get of that possibility.
08:26:270Alessandro Brighente: Okay, so indeed, that's not something that we use but what we can use is to merge the different maps that we get from from the different vehicles. Right? Ideally, they should make sense with one another.
08:40:680Alessandro Brighente: And when we so we want to generate something that is called collaborative anomaly detection, right? So putting together the maps that we get from from the different, because we can get some insights on the point we have to generate, and whether they make sense or not. So of course, we have some challenges in doing this.
09:01:189Alessandro Brighente: So the 1st challenge is that we must be able to finish the perception cycle within a hard deadline. So what does that mean? It means that I need to be very efficient. And it means that they need to share only minimal data.
09:18:380Alessandro Brighente: Right? So I cannot send too much data, because otherwise I will slow down the perception or the the normal detection framework. So I cannot share. For instance, the whole point cloud. I need to share something smaller. And the other thing is, okay, I need to process this data. So how do I cannot have a centralized
09:39:340Alessandro Brighente: entity that is performing the anomaly detection. No, this is something that should be comparatively done by all the vehicles.
09:48:689Alessandro Brighente: Then second challenge is that, if I have attacks that are a launch over a smaller region they might be very difficult to identify. Right? If I'm spoofing an object that is
10:02:329Alessandro Brighente: located far away. My explanation says I would just have a small number of points in there, and it's confined in a small region so our only detection should have a grain that is fine enough in order to to be able to
10:16:880Alessandro Brighente: detect also this object. And the 3rd challenge is that we might have
10:25:292Alessandro Brighente: synchronization issue between the the map selected by the different ladders. Right? So they need to be aligned somehow.
10:30:930Alessandro Brighente: And the
10:32:440Alessandro Brighente: sometimes we
10:35:920Alessandro Brighente: simply have no means of having perfect alignment between one map and the other right? So if a vehicle is perceiving an object at a certain location, and the other vehicle is perceiving an object
10:48:920Alessandro Brighente: at the location, which is slightly different.
10:51:540Alessandro Brighente: then we should define how much slightly is enough to deem this maps as valid, and not deem one of these maps as containing a spoof object.
11:04:20Alessandro Brighente: Okay? But still we should allow these these more differences in in the location of the object. And this is something that happens also in the legitimate cases when the comparative autonomous vehicles are sharing these maps with the
11:18:320Alessandro Brighente: with one another.
11:22:570Alessandro Brighente: Okay. So yeah.
11:25:40Alessandro Brighente: let's.
11:26:580Alessandro Brighente: we'll see next week how this
11:32:860Alessandro Brighente: move this slides. Because.
11:34:420Alessandro Brighente: oh, okay, yeah, we'll do that.