Assignment 1

Avinash Prem Kumar Koyya, Y9156

A.
7,1 3,5
		
		0.3 0.5 0.8 0.1 0.1 
		0.1 0.0 0.5 0.5 0.5 
		0.3 0.5 0.4 0.3 0.2 
		0.3 0.1 0.7 0.8 0.2 
		0.0 0.0 0.2 0.8 0.3 
		0.0 0.0 0.0 0.5 0.1 
		[0.0] 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.2 0.0 

		D 0.0
		R 0.0
		L 0.0
		U 0.0
		R 0.0

		0.3 0.5 0.8 0.1 0.1 
		0.1 0.0 0.5 0.5 0.5 
		0.3 0.5 0.4 0.3 0.2 
		0.3 0.1 0.7 0.8 0.2 
		0.0 0.0 0.2 0.8 0.3 
		0.0 0.0 0.0 0.5 0.1 
		0.0 [0.0] 0.0 0.5 0.1 
		0.0 0.0 0.0 0.2 0.0 

		R 0.0
		L 0.0
		U 0.0
		D 0.0
		D 0.0

		0.3 0.5 0.8 0.1 0.1 
		0.1 0.0 0.5 0.5 0.5 
		0.3 0.5 0.4 0.3 0.2 
		0.3 0.1 0.7 0.8 0.2 
		0.0 0.0 0.2 0.8 0.3 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.5 0.1 
		0.0 [0.0] 0.0 0.2 0.0 

		U 0.0
		R 0.0
		L 0.0
		R 0.0
		R 0.0

		0.3 0.5 0.8 0.1 0.1 
		0.1 0.0 0.5 0.5 0.5 
		0.3 0.5 0.4 0.3 0.2 
		0.3 0.1 0.7 0.8 0.2 
		0.0 0.0 0.2 0.8 0.3 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 [0.5] 0.1 
		0.0 0.0 0.0 0.2 0.0 

		S 0.5
		L 0.5
		U 0.5
		L 0.5
		U 0.5

		0.3 0.5 0.8 0.1 0.1 
		0.1 0.0 0.5 0.5 0.5 
		0.3 0.5 0.4 0.3 0.2 
		0.3 0.1 0.7 0.8 0.2 
		0.0 [0.0] 0.2 0.8 0.3 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.0 0.1 
		0.0 0.0 0.0 0.2 0.0 

		U 0.5
		S 0.6
		R 0.6
		S 1.3
		U 1.3

		0.3 0.5 0.8 0.1 0.1 
		0.1 0.0 0.5 0.5 0.5 
		0.3 0.5 [0.4] 0.3 0.2 
		0.3 0.0 0.0 0.8 0.2 
		0.0 0.0 0.2 0.8 0.3 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.0 0.1 
		0.0 0.0 0.0 0.2 0.0 

		S 1.7
		R 1.7
		S 2.0
		D 2.0
		S 2.8
		

		0.3 0.5 0.8 0.1 0.1 
		0.1 0.0 0.5 0.5 0.5 
		0.3 0.5 0.4 0.3 [0.2] 
		0.3 0.1 0.7 0.8 0.2 
		0.0 0.0 0.2 0.8 0.3 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.2 0.0 

		S 0.2
		D 0.2
		S 0.4
		D 0.4
		S 0.7

		0.3 0.5 0.8 0.1 0.1 
		0.1 0.0 0.5 0.5 0.5 
		0.3 0.5 0.4 0.3 0.0 
		0.3 0.1 0.7 0.8 0.0 
		0.0 0.0 0.2 0.8 [0.0] 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.2 0.0 

		U 0.7
		D 0.7
		L 0.7
		S 1.5
		R 1.5

		0.3 0.5 0.8 0.1 0.1 
		0.1 0.0 0.5 0.5 0.5 
		0.3 0.5 0.4 0.3 0.0 
		0.3 0.1 0.7 0.8 0.0 
		0.0 0.0 0.2 0.0 [0.0] 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.2 0.0 

		D 1.5
		S 1.6
		U 1.6
		U 1.6
		D 1.6

		0.3 0.5 0.8 0.1 0.1 
		0.1 0.0 0.5 0.5 0.5 
		0.3 0.5 0.4 0.3 0.0 
		0.3 0.1 0.7 0.8 0.0 
		0.0 0.0 0.2 0.0 [0.0] 
		0.0 0.0 0.0 0.5 0.0 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.2 0.0 

		L 1.6
		R 1.6
		L 1.6
		R 1.6
		L 1.6

		0.3 0.5 0.8 0.1 0.1 
		0.1 0.0 0.5 0.5 0.5 
		0.3 0.5 0.4 0.3 0.0 
		0.3 0.1 0.7 0.8 0.0 
		0.0 0.0 0.2 [0.0] 0.0 
		0.0 0.0 0.0 0.5 0.0 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.2 0.0 

		L 1.6
		S 1.8
		U 1.8
		S 2.5
		L 2.5

		0.3 0.5 0.8 0.1 0.1 
		0.1 0.0 0.5 0.5 0.5 
		0.3 0.5 0.4 0.3 0.0 
		0.3 [0.1] 0.0 0.8 0.0 
		0.0 0.0 0.0 0.0 0.0 
		0.0 0.0 0.0 0.5 0.0 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.2 0.0 

		S 2.6
		U 2.6
		S 3.1
		U 3.1
		R 3.1

B.
7,1 3,5

		0.3 0.5 0.8 0.1 0.1 
		0.1 0.0 0.5 0.5 0.5 
		0.3 0.5 0.4 0.3 0.2 
		0.3 0.1 0.7 0.8 0.2 
		0.0 0.0 0.2 0.8 0.3 
		0.0 0.0 0.0 0.5 0.1 
		[0.0] 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.2 0.0 

		D 0.0
		R 0.0
		L 0.0
		U 0.0
		R 0.0

		0.3 0.5 0.8 0.1 0.1 
		0.1 0.0 0.5 0.5 0.5 
		0.3 0.5 0.4 0.3 0.2 
		0.3 0.1 0.7 0.8 0.2 
		0.0 0.0 0.2 0.8 0.3 
		0.0 0.0 0.0 0.5 0.1 
		0.0 [0.0] 0.0 0.5 0.1 
		0.0 0.0 0.0 0.2 0.0 

		R 0.0
		R 0.0
		S 0.5
		U 0.5
		S 1.0

		0.3 0.5 0.8 0.1 0.1 
		0.1 0.0 0.5 0.5 0.5 
		0.3 0.5 0.4 0.3 0.2 
		0.3 0.1 0.7 0.8 0.2 
		0.0 0.0 0.2 0.8 0.3 
		0.0 0.0 0.0 [0.0] 0.1 
		0.0 0.0 0.0 0.0 0.1 
		0.0 0.0 0.0 0.2 0.0 

		U 1.0
		S 1.8
		U 1.8
		S 2.6
		L 2.6

		0.3 0.5 0.8 0.1 0.1 
		0.1 0.0 0.5 0.5 0.5 
		0.3 0.5 0.4 0.3 0.2 
		0.3 0.1 [0.7] 0.0 0.2 
		0.0 0.0 0.2 0.0 0.3 
		0.0 0.0 0.0 0.0 0.1 
		0.0 0.0 0.0 0.0 0.1 
		0.0 0.0 0.0 0.2 0.0 

		S 3.3
		U 3.3
		S 3.7
		R 3.7
		S 4.0

		0.3 0.5 0.8 0.1 0.1 
		0.1 0.0 0.5 0.5 0.5 
		0.3 0.5 0.0 [0.0] 0.2 
		0.3 0.1 0.0 0.0 0.2 
		0.0 0.0 0.2 0.0 0.3 
		0.0 0.0 0.0 0.0 0.1 
		0.0 0.0 0.0 0.0 0.1 
		0.0 0.0 0.0 0.2 0.0 

		U 4.0
		S 4.5
		R 4.5
		S 5.0
		D 5.0

		0.3 0.5 0.8 0.1 0.1 
		0.1 0.0 0.5 0.0 0.0 
		0.3 0.5 0.0 0.0 [0.2] 
		0.3 0.1 0.0 0.0 0.2 
		0.0 0.0 0.2 0.0 0.3 
		0.0 0.0 0.0 0.0 0.1 
		0.0 0.0 0.0 0.0 0.1 
		0.0 0.0 0.0 0.2 0.0 

		S 5.2
		D 5.2
		S 5.4
		D 5.4
		S 5.7


		0.3 0.5 0.8 0.1 0.1 
		0.1 0.0 0.5 0.5 0.5 
		0.3 0.5 0.4 0.3 [0.2] 
		0.3 0.1 0.7 0.8 0.2 
		0.0 0.0 0.2 0.8 0.3 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.2 0.0 

		S 0.2
		U 0.2
		S 0.7
		L 0.7
		S 1.2

		0.3 0.5 0.8 0.1 0.1 
		0.1 0.0 0.5 [0.0] 0.0 
		0.3 0.5 0.4 0.3 0.0 
		0.3 0.1 0.7 0.8 0.2 
		0.0 0.0 0.2 0.8 0.3 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.2 0.0 

		L 1.2
		S 1.7
		U 1.7
		S 2.5
		L 2.5

		0.3 [0.5] 0.0 0.1 0.1 
		0.1 0.0 0.0 0.0 0.0 
		0.3 0.5 0.4 0.3 0.0 
		0.3 0.1 0.7 0.8 0.2 
		0.0 0.0 0.2 0.8 0.3 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.2 0.0 

		S 3.0
		L 3.0
		S 3.3
		D 3.3
		S 3.4

		0.0 0.0 0.0 0.1 0.1 
		[0.0] 0.0 0.0 0.0 0.0 
		0.3 0.5 0.4 0.3 0.0 
		0.3 0.1 0.7 0.8 0.2 
		0.0 0.0 0.2 0.8 0.3 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.2 0.0 

		D 3.4
		S 3.7
		R 3.7
		S 4.2
		R 4.2

		0.0 0.0 0.0 0.1 0.1 
		0.0 0.0 0.0 0.0 0.0 
		0.0 0.0 [0.4] 0.3 0.0 
		0.3 0.1 0.7 0.8 0.2 
		0.0 0.0 0.2 0.8 0.3 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.2 0.0 

		S 4.6
		D 4.6
		S 5.3
		R 5.3
		S 6.1

		0.0 0.0 0.0 0.1 0.1 
		0.0 0.0 0.0 0.0 0.0 
		0.0 0.0 0.0 0.3 0.0 
		0.3 0.1 0.0 [0.0] 0.2 
		0.0 0.0 0.2 0.8 0.3 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.5 0.1 
		0.0 0.0 0.0 0.2 0.0 

		D 6.1
		S 6.9
		D 6.9
		S 7.4
		D 7.4

code.zip

D.
a. An agent is perfectly rational if it makes the 'right' decision given what it knows; its choices must follow by logical deduction from its percepts so as to best serve its purpose. The simple reflex agent considered here has no knowledge of the state of the surrounding grids, so it decides in only two instances:
-when the current grid contains dirt, it removes it;
-when it is clean, it moves to another grid.
Hence the agent behaves perfectly rationally only when the move leaves it without choice, i.e., when it has at most one neighbour, for a total of at most two grids.
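The two rules can be sketched as follows; representing the arena as a dict of grid positions is an assumption here, since the actual implementation is in code.zip:

```python
import random

def reflex_step(grid, pos):
    """One step of the simple reflex agent: suck if dirty, else move randomly.

    grid: dict mapping (row, col) -> dirt amount; pos: current (row, col).
    Returns (new_pos, dirt_cleaned).
    """
    if grid[pos] > 0:            # rule 1: the current grid contains dirt -> remove it
        cleaned = grid[pos]
        grid[pos] = 0.0
        return pos, cleaned
    # rule 2: the current grid is clean -> move to a random neighbouring grid
    r, c = pos
    neighbours = [(r + dr, c + dc)
                  for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                  if (r + dr, c + dc) in grid]
    return random.choice(neighbours), 0.0
```

In a two-grid world the clean-grid rule has only one legal move, so the agent's behaviour is fully determined; this is the "no choice" condition argued above.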

b. The greedy robot in B is deterministic as long as the quantities of dirt among its neighbours are all different. It becomes non-deterministic when more than one neighbour possesses the highest value among them, since the tie must then be broken arbitrarily.
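The tie condition can be made precise with a small sketch (the grid representation is an assumption; the actual implementation is in code.zip):

```python
def greedy_choice(grid, pos):
    """Greedy rule from part B: move to the neighbour with the most dirt.

    Returns (tied, deterministic): the neighbours sharing the highest dirt
    value, and whether the choice is uniquely determined.
    """
    r, c = pos
    neighbours = [(r + dr, c + dc)
                  for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                  if (r + dr, c + dc) in grid]
    best = max(grid[n] for n in neighbours)
    tied = [n for n in neighbours if grid[n] == best]
    return tied, len(tied) == 1
```

When `tied` has more than one entry, any tie-breaking rule (random, fixed order) is external to the greedy criterion itself, which is exactly the non-deterministic case.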

c. In an environment where the distribution of dirt is weighted heavily towards one corner of the arena and the agent starts at the diagonally opposite corner, it is very unlikely that the agent moves into the high-density region soon (assuming the agent moves in each of the four directions with equal probability). The following arena is such an example when the agent is initialised at 1,1.

								0.0	0.0	0.0	0.0	0.0
								0.0	0.0	0.0	0.0	0.1
								0.0	0.0	0.0	0.3	0.4
								0.0	0.0	0.0	0.4	0.5
								0.0	0.0	0.3	0.6	0.7
								0.0	0.0	0.4	0.8	0.9
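This can be checked with a small Monte Carlo sketch of the random walk; the step budget and trial count below are illustrative assumptions:

```python
import random

def reaches_corner(rows, cols, steps, trials=2000, seed=0):
    """Fraction of random walks starting at (0, 0) that reach the opposite
    corner (rows-1, cols-1) within `steps` moves; each move is sampled
    uniformly from the in-bounds neighbours."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        r, c = 0, 0
        for _ in range(steps):
            moves = [(r + dr, c + dc)
                     for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                     if 0 <= r + dr < rows and 0 <= c + dc < cols]
            r, c = rng.choice(moves)
            if (r, c) == (rows - 1, cols - 1):
                hits += 1
                break
    return hits / trials
```

On the 6x5 arena above, the shortest path to the dirty corner takes 9 steps, and the fraction of walks that get there within that budget is tiny; it grows only slowly with the step budget.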
				

d. The rational agent would use the data collected over time, namely the difference between the estimated (sensed) dirt and the actual dirt it found at each place, to measure the average error in measurement. It would then account for this measurement error when choosing which grid to move to (as in the greedy approach).
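A sketch of this running-error estimate; the class and method names are illustrative, not from the assignment code:

```python
class ErrorTracker:
    """Running average of (sensed - actual) dirt, used to debias future
    sensor readings before the greedy move choice."""

    def __init__(self):
        self.total_error = 0.0
        self.count = 0

    def record(self, sensed, actual):
        """Log one (sensed, actual) pair after cleaning a grid."""
        self.total_error += sensed - actual
        self.count += 1

    def corrected(self, sensed):
        """Subtract the average error so far from a new sensed value."""
        bias = self.total_error / self.count if self.count else 0.0
        return sensed - bias
```

The corrected values, rather than the raw sensor readings, would then feed the greedy comparison of neighbours.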

e. Since the amount of dirt that reappears on a grid is again a random number, the rational agent would treat the dirt value of a grid as modified 20 steps after it had cleaned it, and would use the new (unknown) value for further calculations.
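A minimal sketch of this bookkeeping, assuming the agent records the step at which it cleaned each grid (the names and the 20-step interval follow the description above):

```python
def grids_to_revisit(cleaned_at, step, interval=20):
    """Grids whose dirt value should be treated as unknown again, because
    at least `interval` steps have passed since the agent cleaned them.

    cleaned_at: dict mapping grid -> step at which it was last cleaned."""
    return [g for g, t in cleaned_at.items() if step - t >= interval]
```

These grids would re-enter the agent's planning with an unknown (or re-sensed) dirt value instead of the stale 0.0 recorded at cleaning time.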

E.
image

a. The location of the floor and the walls can be determined using prior knowledge of the difference in their colours. The colour difference at the border helps locate the continuous edge between the two planes when the image is processed. The lower side of the edge is taken to be the floor, owing to the orientation of the camera on the agent.
[The image above shows three distinct and parallel edges, caused by the padding on the lower wall. On prior knowledge, the agent takes the lowermost edge to be the corner edge between the floor and the wall.]
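A minimal sketch of the edge search, assuming a greyscale image given as a list of rows and a sharp brightness difference between wall and floor (the threshold value is an illustrative assumption):

```python
def floor_edge(image, threshold=50):
    """For each column of a greyscale image (list of rows of intensities),
    return the row index of the lowest strong brightness change, taken as
    the floor/wall edge; None for a column with no such change."""
    rows, cols = len(image), len(image[0])
    edges = []
    for c in range(cols):
        edge = None
        for r in range(rows - 1):
            if abs(image[r + 1][c] - image[r][c]) > threshold:
                edge = r + 1          # keep the lowest change in the column
        edges.append(edge)
    return edges
```

Keeping the lowest change per column is what selects the lowermost of several parallel edges, as in the padded-wall case noted above.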

image

b. Given that the agent is able to detect the borders of the grids on the arena, it can process the image to locate the dirt and thus map it to the grid it lies wholly in (or distribute it, if it lies on a border). As shown in the image above, the visibility of the grid borders helps it determine the mapping of the dirt and also its own location.

c. A single value between 0 and 1 would not be sufficient to represent dirt of different types. For example, the types of dirt that can be removed by vacuum suction cannot be quantified relative to those which need to be mopped. Such cases may therefore call for a dirt vector.
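A sketch of what a dirt vector could look like, with one component per dirt type; the two categories and the weighting scheme are illustrative assumptions, not part of the assignment:

```python
# One component per dirt type, since suckable dust and moppable stains
# are not comparable on a single 0-1 scale.
DIRT_TYPES = ("vacuum", "mop")   # illustrative categories

def total_effort(cell, weights):
    """Combine a cell's dirt vector into a single effort score, but only
    when an explicit per-type weighting is supplied by the designer."""
    return sum(weights[t] * cell[t] for t in DIRT_TYPES)
```

The point is that any scalar comparison between cells now requires an explicit weighting; without one, the types remain incommensurable.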

d. The robot may locate itself by finding, in the image, the borders of the grids around it and hence the grid in which it itself stands. The robot would know the height of the camera above the floor and the distance of the camera from its own centre (if the camera is projected outward), from which it can calculate its own location from that of the camera.
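A sketch of that calculation under a simple pinhole assumption: a floor point seen at an angle below the horizon lies at horizontal distance h/tan(angle) from a camera at height h, and the camera's forward offset from the robot centre is then added. The parameter names are illustrative:

```python
import math

def floor_distance(camera_height, angle_below_horizon, camera_offset=0.0):
    """Horizontal distance from the robot centre to a floor point.

    camera_height: height of the camera above the floor (assumed known);
    angle_below_horizon: viewing angle of the floor point, in radians;
    camera_offset: forward projection of the camera from the robot centre."""
    return camera_height / math.tan(angle_below_horizon) + camera_offset
```

Applying this to the observed grid borders gives the robot's offset within its grid, and hence its location on the arena.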