This class represents a camera composed of a body (used only for display), a sensor and a lens; it is therefore an assembly.
The main role of this class is to drive the sensor and the lens together, so that the user can call higher-level functions (such as initialize) instead of a complicated series of internal calls.
The camera creates a mapping of world coordinates into sensor coordinates by using direct linear transformations (a pinhole model). Lens imperfections and ambient influence can be modeled by using several DLTs (which is controlled by the parameter mappingResolution).
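As a minimal illustration of the idea (not the exact storage convention of the class, which keeps one \(4 \times 3\) matrix per subregion), a single DLT in the common \(3 \times 4\) layout maps homogeneous world coordinates to projective sensor coordinates:

    import numpy as np

    # Minimal sketch of a direct linear transformation (pinhole model).
    # M is a hypothetical 3x4 DLT matrix mapping homogeneous world
    # coordinates (x, y, z, 1) to projective sensor coordinates (u*w, v*w, w).
    M = np.array([[1000.0,    0.0, 500.0, 0.0],
                  [   0.0, 1000.0, 400.0, 0.0],
                  [   0.0,    0.0,   1.0, 0.0]])

    point = np.array([0.1, 0.2, 2.0, 1.0])   # world point, homogeneous
    uvw   = M @ point                        # projective sensor coordinates
    u, v  = uvw[:2] / uvw[2]                 # divide by w to get the sensor position
    print(u, v)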
The reference points and origin used in the development of this class are shown in the figure below:
The following points are noteworthy:
\(z\) points to the right.
The origin of the camera is the center of its flange (the connection with its lens).
The sensor is positioned at a negative \(x\) position.
Methods
This signals to the ray tracing implementation that no attempt should be made to intersect rays with the camera.
Allowable circle of confusion diameter used for the depth of field calculation (standard value: \(29 \cdot 10^{-6}\,m\))
Camera body color (for plotting only)
Determinant of the mapping matrices (used to verify whether the \((x,y,z)-(u,v)\) mapping goes from a right-handed coordinate system to another right-handed one, or whether there is a flip)
Camera body dimensions (for plotting only)
Derivative of the \((x,y,z)-(u,v)\) mapping
This function calculates the camera field of view and its mapping functions relating world coordinates to sensor coordinates. It should be called whenever the ambient is set up, so that the camera can be used for synthetic image generation and display.
The procedure estimates the focusing region by using a hybrid ray tracing approach. This is needed because the conjugation equation (\(f^{-1} = p^{-1} + {p^{\prime}}^{-1}\)) cannot represent cases where there is a refraction between the measurement area and the camera.
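As a quick worked example of the conjugation equation alone (arbitrary values, no refracting elements in between):

    # Solving the conjugation equation for the image distance p',
    # with example values f = 50 mm and p = 1 m (both in meters).
    f, p = 0.050, 1.0
    p_prime = 1.0 / (1.0 / f - 1.0 / p)
    print(p_prime)   # ~0.0526 m, i.e. the image forms ~52.6 mm behind the lens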
The approach then consists of taking the theoretical focusing points (as calculated by the conjugation equation) and using them to derive initial vectors (departing from the exit pupil) for a ray tracing. The focusing point is then the intersection between the rays. The procedure is shown in the figure below:
The program casts 4 rays per point (departing in the directions \(y, z, -y, -z\) in the camera reference frame). This is sufficient for most cases, but it is only an approximation when the refracting surface is not perpendicular to the camera axis (except when it is inclined about the camera's \(y\) or \(z\) axis only), because astigmatism is then present.
The line intersection is performed separately between the rays starting from \(y, -y\) and from \(z, -z\), which generates the points denoted VV and VH. When these points do not coincide, the system is astigmatic.
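A minimal sketch of this intersection step, assuming each traced ray is given by an origin and a direction after refraction (the values below are hypothetical, not the library's internal data):

    import numpy as np

    def closest_point_to_rays(origins, directions):
        """Least-squares point closest to a set of rays (origin + direction)."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, directions):
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to d
            A += P
            b += P @ o
        return np.linalg.solve(A, b)

    # Hypothetical rays departing from the +y/-y and +z/-z edges of the exit pupil
    origins_y = [np.array([0.0,  0.01, 0.0]), np.array([0.0, -0.01, 0.0])]
    dirs_y    = [np.array([1.0, -0.02, 0.0]), np.array([1.0,  0.02, 0.0])]
    origins_z = [np.array([0.0, 0.0,  0.01]), np.array([0.0, 0.0, -0.01])]
    dirs_z    = [np.array([1.0, 0.0, -0.02]), np.array([1.0, 0.0,  0.02])]

    VV = closest_point_to_rays(origins_y, dirs_y)   # intersection of the y, -y pair
    VH = closest_point_to_rays(origins_z, dirs_z)   # intersection of the z, -z pair
    print(VV, VH, np.linalg.norm(VV - VH))          # a non-zero distance indicates astigmatism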
This approach is in fact adapted to consider the depth of field. The extension is natural and involves calculating the theoretical forward and aft focusing tolerances (a function of the parameter circleOfConfusionDiameter). The procedure is shown in the figure below:
For each subfield (the number of which is defined by the mappingResolution property), 8 points are calculated. These points are then used to derive a direct linear transformation (DLT) matrix relating the world coordinates to the sensor coordinates.
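A minimal sketch of the DLT derivation from point correspondences, using the classic homogeneous least-squares system (a \(3 \times 4\) layout is used here for readability; the class itself stores \(4 \times 3\) matrices, and the correspondences below are hypothetical):

    import numpy as np

    def fit_dlt(world_pts, sensor_pts):
        """Least-squares DLT fit: world (N,3) -> sensor (N,2), returns a 3x4 matrix."""
        A = []
        for (x, y, z), (u, v) in zip(world_pts, sensor_pts):
            X = [x, y, z, 1.0]
            A.append(X + [0.0, 0.0, 0.0, 0.0] + [-u * c for c in X])
            A.append([0.0, 0.0, 0.0, 0.0] + X + [-v * c for c in X])
        # The DLT coefficients are the right singular vector associated
        # with the smallest singular value (homogeneous least squares).
        _, _, Vt = np.linalg.svd(np.asarray(A))
        return Vt[-1].reshape(3, 4)

    # Hypothetical correspondences: 8 world points and their sensor positions
    M_true = np.array([[800.0, 0.0, 300.0, 0.0],
                       [0.0, 800.0, 250.0, 0.0],
                       [0.0,   0.0,   1.0, 0.0]])
    world  = np.random.default_rng(0).uniform(-1.0, 1.0, (8, 3)) + [0.0, 0.0, 3.0]
    sensor = np.array([(M_true @ np.append(p, 1.0))[:2] / (M_true @ np.append(p, 1.0))[2]
                       for p in world])

    M_fit = fit_dlt(world, sensor)
    print(M_fit / M_fit[2, 2])   # agrees with M_true up to scale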
Camera lens
This method determines the positions at which a set of points \((x,y,z)\) maps onto the camera sensor.
In order to optimize calculation speed, a lot of memory is used, but the method seems to run smoothly with up to \(2\cdot10^6\) points in a \(5 \times 5\) mapping (with 2 GB of available RAM).
The number of elements in the bottleneck matrix is:
Parameters:
    pts : numpy.ndarray \((N,3)\)
    skipPupilAngle : boolean

Returns:
    uv : numpy.ndarray \((N,3)\)
    w : numpy.ndarray \((N)\)
    dudv : numpy.ndarray \((N,6)\)
    lineOfSight : numpy.ndarray \((N,3)\)
    imdim : numpy.ndarray \((N)\)
    pupilSolidAngle : numpy.ndarray \((N,3)\)
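A hedged sketch of how the projection can be vectorized for many points with a single matrix (the class itself uses one matrix per mapping subregion, as described by the attributes below; the matrix and points here are hypothetical):

    import numpy as np

    # Vectorized application of a single hypothetical 3x4 DLT matrix to N points.
    M   = np.array([[1000.0,    0.0, 500.0, 0.0],
                    [   0.0, 1000.0, 400.0, 0.0],
                    [   0.0,    0.0,   1.0, 0.0]])
    pts = np.random.default_rng(1).uniform(-0.5, 0.5, (100_000, 3)) + [0.0, 0.0, 2.0]

    pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])   # homogeneous coordinates (N,4)
    uvw   = pts_h @ M.T                                     # projective sensor coordinates (N,3)
    uv    = uvw[:, :2] / uvw[:, 2:3]                        # sensor coordinates (N,2)
    w     = uvw[:, 2]                                       # homogeneous scale factors (N)
    print(uv.shape, w.shape)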
2D vector of 4x3 matrices that perform the \((x,y,z)-(u,v)\) mapping
Number of rays to be cast in \(y\) and \(z\) directions for creation of the \((x,y,z)-(u,v)\) mapping
For each mapping subregion, this stores its center in world coordinates
Wavelength used for creation of the mapping
Camera sensor
Distance between the camera flange and the sensor (positive towards the front)
For each mapping subregion, this stores its center in the sensor plane
This is a convenience function to set the Scheimpflug angle as in a well-built adapter (which means that the pivoting is performed through the sensor center).
Parameters:
    angle : float (radians)
    axis : numpy.ndarray \((3)\)
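A minimal geometric sketch of what pivoting through the sensor center means, rotating hypothetical sensor corner points about their center with Rodrigues' formula (this illustrates only the geometry, not the class internals):

    import numpy as np

    def rotate_about_point(points, angle, axis, pivot):
        """Rotate points by angle [rad] around an axis passing through pivot."""
        k = axis / np.linalg.norm(axis)
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
        return (points - pivot) @ R.T + pivot

    # Hypothetical sensor corners; a well-built adapter keeps the sensor
    # center fixed while the sensor tilts by the Scheimpflug angle.
    corners = np.array([[0.0,  0.012,  0.008], [0.0, -0.012,  0.008],
                        [0.0, -0.012, -0.008], [0.0,  0.012, -0.008]])
    center  = corners.mean(axis=0)
    tilted  = rotate_about_point(corners, np.deg2rad(5.0), np.array([0.0, 0.0, 1.0]), center)
    print(tilted)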
If there are optical elements between the mapping region and the camera, a “virtual aperture” area is calculated for light intensity measurements.
Returns an assembly composed of cameras at the positions and orientations defined by the original camera mapping. For example, if a camera is looking through a mirror, the virtual camera will be the mirror image of the original camera.
Parameters:
    centeronly : boolean

Returns:
    virtualCameras : pyvsim.Assembly
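A minimal sketch of the mirror-image idea, assuming a flat mirror described by a point and a unit normal; the camera position and viewing direction are reflected across the plane (illustrative values only, not the library's API):

    import numpy as np

    def reflect_across_plane(point, direction, plane_point, plane_normal):
        """Reflect a position and a direction vector across a flat mirror plane."""
        n = plane_normal / np.linalg.norm(plane_normal)
        p_ref = point - 2.0 * np.dot(point - plane_point, n) * n
        d_ref = direction - 2.0 * np.dot(direction, n) * n
        return p_ref, d_ref

    # Hypothetical camera at the origin looking along +x towards a 45-degree mirror
    cam_pos  = np.array([0.0, 0.0, 0.0])
    cam_axis = np.array([1.0, 0.0, 0.0])
    mirror_p = np.array([1.0, 0.0, 0.0])
    mirror_n = np.array([-1.0, 1.0, 0.0]) / np.sqrt(2.0)

    virt_pos, virt_axis = reflect_across_plane(cam_pos, cam_axis, mirror_p, mirror_n)
    print(virt_pos, virt_axis)   # virtual camera at (1, -1, 0) looking along +y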