Computer vision systems rely on image sensors that detect electromagnetic radiation, typically in the form of visible or infrared light. The sensors themselves are designed using solid-state physics, while the way light propagates and reflects off surfaces is explained by optics. Sophisticated image sensors even require quantum mechanics for a complete understanding of the image formation process. Robots can also be equipped with multiple vision sensors so that they can better compute depth in the environment. Like human eyes, robots' "eyes" must be able to focus on a particular area of interest and adjust to variations in light intensity.
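A common way that multiple vision sensors yield depth is stereo triangulation: a point seen by two horizontally offset cameras appears shifted (the disparity) between the two images, and depth is inversely proportional to that shift. A minimal sketch of the idea, with an illustrative (assumed, not standard) focal length of 700 pixels and a 10 cm camera baseline:

```python
def depth_from_disparity(disparity_px: float,
                         focal_length_px: float = 700.0,
                         baseline_m: float = 0.1) -> float:
    """Depth in metres for a pixel matched across a rectified stereo pair.

    Uses the pinhole stereo relation Z = f * B / d, where f is the focal
    length in pixels, B the baseline between the cameras in metres, and
    d the disparity in pixels. The default f and B are illustrative values.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# A point whose images are 35 px apart lies 700 * 0.1 / 35 = 2.0 m away.
print(depth_from_disparity(35.0))
```

Note how nearby objects produce large disparities and distant objects produce small ones, which is why stereo depth estimates degrade with range.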