Camera

Introduction

vtkCamera is a virtual camera for 3D rendering. It provides methods
to position and orient the view point and focal point. Convenience
methods for moving about the focal point are also provided. More
complex methods allow the manipulation of the computer graphics
model, including the view up vector, clipping planes, and
camera perspective.

newInstance()

Construct a camera instance with its focal point at the origin and position=(0,0,1). The
view up is along the y-axis, the view angle is 30 degrees, and the clipping range is (0.01, 1000.01).
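
For example, a minimal sketch of constructing a camera and inspecting its defaults (the import path assumes the standard vtk.js source layout):

```js
import vtkCamera from 'vtk.js/Sources/Rendering/Core/Camera';

const camera = vtkCamera.newInstance();
console.log(camera.getPosition());   // [0, 0, 1]
console.log(camera.getFocalPoint()); // [0, 0, 0]
console.log(camera.getViewUp());     // [0, 1, 0]
console.log(camera.getViewAngle());  // 30
```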

setPosition(x, y, z), getPosition()

Set/Get the position of the camera in world coordinates. The default position is (0,0,1).

setFocalPoint(x, y, z), getFocalPoint()

Set/Get the focal point of the camera in world coordinates. The default focal point is the origin.

setViewUp(x, y, z), getViewUp()

Set/Get the view up direction for the camera. The default is (0,1,0).
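
Together with setPosition() and setFocalPoint(), this is enough to frame a view. A small sketch (reusing the `camera` from above) that sets up a top-down view:

```js
camera.setPosition(0, 10, 0);  // 10 units above the origin
camera.setFocalPoint(0, 0, 0); // looking straight down -y
camera.setViewUp(0, 0, 1);     // with +z pointing up on screen
```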

orthogonalizeViewUp()

Recompute the ViewUp vector to force it to be perpendicular to the vector from the camera
position to the focal point. Unless you are going to use yaw() or azimuth() on the camera, there is no need to do this.

setDistance(dist), getDistance()

Set: Move the focal point so that it is the specified distance from the camera position, along the view plane normal.
This distance must be positive.
Get: Returns the distance from the camera position to the focal point. This distance is positive.

setDirectionOfProjection(x, y, z)

Set the direction of projection. The focal point is recalculated to stay the same distance from the camera as before, but along the new projection vector.

getDirectionOfProjection()

Get the vector in the direction from the camera position to the focal point. This is usually
the opposite of the ViewPlaneNormal, the vector perpendicular to the screen, unless the view
is oblique.

dolly(value)

Divide the camera’s distance from the focal point by the given dolly value. Use a value
greater than one to dolly-in toward the focal point, and use a value less than one to dolly-out
away from the focal point.
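
For instance, continuing with the camera above:

```js
camera.setFocalPoint(0, 0, 0);
camera.setPosition(0, 0, 10);      // distance = 10
camera.dolly(2);                   // dolly-in: distance becomes 10 / 2 = 5
console.log(camera.getDistance()); // 5
camera.dolly(0.5);                 // dolly-out: distance becomes 5 / 0.5 = 10
```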

roll(degrees)

Rotate the camera about the direction of projection. This will spin the camera about its view axis.

azimuth(degrees)

Rotate the camera about the view up vector centered at the focal point. Note that the view up
vector is whatever was set via setViewUp(), and is not necessarily perpendicular to the direction
of projection. The result is a horizontal rotation of the camera.

yaw(degrees)

Rotate the focal point about the view up vector, using the camera’s position as the center of
rotation. Note that the view up vector is whatever was set via setViewUp(), and is not necessarily
perpendicular to the direction of projection. The result is a horizontal rotation of the scene.

elevation(degrees)

Rotate the camera about the cross product of the negative of the direction of projection and the
view up vector, using the focal point as the center of rotation. The result is a vertical rotation
of the scene.

pitch(degrees)

Rotate the focal point about the cross product of the view up vector and the direction of projection,
using the camera’s position as the center of rotation. The result is a vertical rotation of the camera.
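
These four rotations are commonly combined to orbit the camera around the focal point; a sketch of the usual trackball-style pattern:

```js
camera.azimuth(30);   // 30° sideways around the focal point
camera.elevation(15); // 15° up around the focal point
// re-orthogonalize viewUp after elevation so subsequent azimuths stay level
camera.orthogonalizeViewUp();
```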

zoom(factor)

In perspective mode, decrease the view angle by the specified factor. In parallel mode, decrease
the parallel scale by the specified factor. A value greater than 1 is a zoom-in, a value less than
1 is a zoom-out.
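
A short sketch contrasting the two projection modes:

```js
camera.setViewAngle(30);
camera.zoom(2); // perspective: view angle becomes 15° (zoom-in)

camera.setParallelProjection(true);
camera.setParallelScale(10);
camera.zoom(2); // parallel: parallel scale becomes 5 (zoom-in)
```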

setParallelProjection(boolean), getParallelProjection()

Set/Get the value of the ParallelProjection instance variable. This determines if the camera should do
a perspective or parallel projection.

setUseHorizontalViewAngle(boolean), getUseHorizontalViewAngle()

Set/Get the value of the UseHorizontalViewAngle instance variable. If set, the camera’s view angle
represents a horizontal view angle, rather than the default vertical view angle. This is useful if
the application uses a display device whose specs indicate a particular horizontal view angle,
or if the application varies the window height but wants to keep the perspective transform unchanged.

setViewAngle(degrees), getViewAngle()

Set/Get the camera view angle, which is the angular height of the camera view measured in degrees.
The default angle is 30 degrees. This method has no effect in parallel projection mode. The formula
for setting the angle up for perfect perspective viewing is: angle = 2*atan((h/2)/d) where h is the
height of the RenderWindow (measured by holding a ruler up to your screen) and d is the distance
from your eyes to the screen.
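
A worked example of that formula, with illustrative measurements:

```js
const h = 0.3; // RenderWindow height on screen, in meters
const d = 0.6; // eye-to-screen distance, in meters
const angle = (2 * Math.atan(h / 2 / d) * 180) / Math.PI; // ≈ 28.07°
camera.setViewAngle(angle);
```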

setParallelScale(scale), getParallelScale()

Set/Get the scaling used for a parallel projection, i.e. the height of the viewport in
world-coordinate distances. The default is 1. Note that the “scale” parameter works as an “inverse
scale” — larger numbers produce smaller images. This method has no effect in perspective projection
mode.

setClippingRange(near, far), getClippingRange()

Set/Get the location of the near and far clipping planes along the direction of projection. Both
of these values must be positive. How the clipping planes are set can have a large impact on how
well Z-buffering works. In particular the front clipping plane can make a very big difference.
Setting it to 0.01 when it really could be 1.0 can have a big impact on your Z-buffer resolution
farther away. The default clipping range is (0.01, 1000.01). Clipping distance is measured in world
coordinates.
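
One common pattern (a sketch, not the only strategy) is to center the clipping range on the focal point while keeping the near plane positive:

```js
const dist = camera.getDistance();
// a 200-unit-deep range centered on the focal point
camera.setClippingRange(Math.max(dist - 100, 0.01), dist + 100);
```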

setWindowCenter(x,y), getWindowCenter()

Set/Get the center of the window in viewport coordinates. The viewport coordinate range is
([-1,+1],[-1,+1]). This method is useful if you have one window which consists of several viewports,
or if you have several screens which you want to act together as one large screen.
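
For instance, two cameras driving two side-by-side screens could act as one double-wide display. This is a hypothetical setup (leftCamera/rightCamera are illustrative names); the signs follow the frustum construction in getProjectionMatrix() in the source below:

```js
leftCamera.setWindowCenter(-1, 0); // frustum covers the left half of the view
rightCamera.setWindowCenter(1, 0); // frustum covers the right half of the view
```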

getViewPlaneNormal()

Get the viewPlaneNormal [x,y,z] array. This vector will point opposite to the direction of projection, unless
you have created a sheared output view using SetViewShear/SetObliqueAngles (not implemented).

Note: to set the viewPlaneNormal, use setDirectionOfProjection()

setUseOffAxisProjection(boolean), getUseOffAxisProjection()

Set/Get whether to use an off-axis frustum. The off-axis frustum is used for the frustum calculations in
getProjectionMatrix(), specifically for stereo rendering. For reference see “High Resolution Virtual Reality”, in Proc.
SIGGRAPH ‘92, Computer Graphics, pages 195-202, 1992.

Note: offAxis projection is NOT IMPLEMENTED.

setScreenBottomLeft(x, y, z), getScreenBottomLeft()

Set/Get the bottom left corner point of the screen. This will be used only for the off-axis frustum
calculation. Default is (-0.5, -0.5, -0.5). Can set individual x, y, z, or provide an array [x, y, z]. Returns an array.

setScreenBottomRight(x, y, z), getScreenBottomRight()

Set/Get the bottom right corner point of the screen. This will be used only for the off-axis frustum
calculation. Default is (0.5, -0.5, -0.5). Can set individual x, y, z, or provide an array [x, y, z]. Returns an array.

setScreenTopRight(x, y, z), getScreenTopRight()

Set/Get the top right corner point of the screen. This will be used only for the off-axis frustum
calculation. Default is (0.5, 0.5, -0.5). Can set individual x, y, z, or provide an array [x, y, z]. Returns an array.

setViewMatrix(mat4)

Manually set the view matrix. While a matrix is set, the camera position, focal point, and view up are ignored, until the view matrix is reset to a falsy value such as null or undefined.
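
A sketch; the input is an ordinary column-major gl-matrix view matrix, which is what computeViewParametersFromViewMatrix() in the source below expects:

```js
import { mat4 } from 'gl-matrix';

const vm = mat4.create();
mat4.lookAt(vm, [0, 0, 5], [0, 0, 0], [0, 1, 0]); // eye, at, up
camera.setViewMatrix(vm);   // the camera now renders with this matrix
camera.setViewMatrix(null); // revert to position/focalPoint/viewUp control
```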

getViewMatrix()

Return the matrix of the view transform. If the viewMatrix was not manually set with setViewMatrix(),
the matrix is computed from the Position, the FocalPoint, and the ViewUp vectors.

setProjectionMatrix(mat4)

Manually set the projection transform matrix. While a matrix is set, the camera’s computed projection (view angle, clipping range, parallel scale) is ignored, until the projection matrix is reset to a falsy value such as null or undefined.

getProjectionMatrix(aspect, nearz, farz)

Return the projection transform matrix, which converts from camera coordinates to viewport
coordinates. The ‘aspect’ is the width/height for the viewport, and the nearz and farz are
the Z-buffer values that map to the near and far clipping planes. The viewport coordinates of
a point located inside the frustum are in the range ([-1,+1],[-1,+1],[nearz,farz]).

getCompositeProjectionMatrix(aspect, nearz, farz)

Return the concatenation of the ViewTransform and the ProjectionTransform. This transform
will convert world coordinates to viewport coordinates. The ‘aspect’ is the width/height
for the viewport, and the nearz and farz are the Z-buffer values that map to the near and
far clipping planes. The viewport coordinates of a point located inside the frustum are
in the range ([-1,+1],[-1,+1],[nearz,farz]).
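
A sketch of projecting the world origin into viewport coordinates. Note that vtk.js returns these matrices transposed relative to gl-matrix conventions (see the transposes in getViewMatrix() and getProjectionMatrix() in the source below), so the matrix is flipped before use:

```js
import { mat4, vec3 } from 'gl-matrix';

const comp = camera.getCompositeProjectionMatrix(16 / 9, -1, 1);
mat4.transpose(comp, comp); // back to column-major for gl-matrix helpers
const p = vec3.create();
vec3.transformMat4(p, [0, 0, 0], comp); // world origin -> viewport coords
```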

setFreezeFocalPoint(boolean), getFreezeFocalPoint()

Set/Get the value of the FreezeFocalPoint instance variable. This determines if the camera should move the focal
point with the camera position. Only used by the MouseCameraTrackballZoomManipulator, but it can be referenced in
any manipulator you choose to build.

setOrientationWXYZ(degrees, x, y, z)

Rotate the camera ‘degrees’ about the [x, y, z] axis, relative to its default orientation, using quaternion math. The focalPoint and viewUp are recomputed from the rotation, and the focal point distance is maintained.
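
For example, starting from the default orientation (looking down -z):

```js
// face the camera down the world +x axis: rotate -90° about y
camera.setOrientationWXYZ(-90, 0, 1, 0);
console.log(camera.getDirectionOfProjection()); // ≈ [1, 0, 0]
```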

applyTransform(transformMat4)

Apply a transform to the camera. The camera position, focal point, and view up are all recalculated
using the 4x4 transform matrix.

setThickness(thickness), getThickness()

Set/Get the distance between clipping planes. This method adjusts the far clipping plane to be
set a distance ‘thickness’ beyond the near clipping plane.

setThicknessFromFocalPoint(thickness)

Set the distance between the clipping planes to ‘thickness’, adjusting both near and far so the range is centered on the focal point of the camera.

Unimplemented methods

getRoll(), setRoll(roll)

NOT IMPLEMENTED. Get/Set the roll angle of the camera about the direction of projection.

setObliqueAngles(alpha, beta)

NOT IMPLEMENTED. Set the oblique viewing angles. The first angle, alpha, is the angle (measured from the horizontal)
that rays along the direction of projection will follow once projected onto the 2D screen. The
second angle, beta, is the angle between the view plane and the direction of projection. This
creates a shear transform x' = x + dz*cos(alpha)/tan(beta), y' = y + dz*sin(alpha)/tan(beta), where
dz is the distance of the point from the focal plane. The angles are (45, 90) by default. Oblique
projections commonly use (30, 63.435).

getProjectionMatrix(renderer)

NOT IMPLEMENTED. Given a vtkRenderer, return the projection transform matrix, which converts from camera
coordinates to viewport coordinates. This method computes the aspect, nearz and farz,
then calls the more specific signature of GetCompositeProjectionTransformMatrix.

getFrustumPlanes(aspect, planes)

NOT IMPLEMENTED. Get the plane equations that bound the view frustum. The plane normals point inward.
The planes array contains six plane equations of the form (Ax+By+Cz+D=0); the first
four values are (A,B,C,D), which repeat for each of the planes. The planes are given
in the following order: -x,+x,-y,+y,-z,+z. Warning: this means left,right,bottom,top,far,near
(NOT near,far). The aspect of the viewport is needed to correctly compute the planes.

getOrientation()

NOT IMPLEMENTED. Get the orientation of the camera (x, y, z orientation angles from the transformation matrix).

getOrientationWXYZ()

NOT IMPLEMENTED. Get the wxyz angle+axis representing the current orientation. The angle is in degrees and
the axis is a unit vector.

getCameraLightTransformMatrix()

NOT IMPLEMENTED. Returns a transformation matrix for a coordinate frame attached to the camera, where the
camera is located at (0, 0, 1) looking at the focal point at (0, 0, 0), with up being (0, 1, 0).

deepCopy(sourceCamera)

NOT IMPLEMENTED. Copy the properties of the source camera into this one, including the contents of the matrices.
Do not pass a null source camera or this camera itself.

Inherited from macro.obj

modified(mtime?)

Mark the camera as modified, update the internal modified time, and notify all registered callbacks.
If an mtime value is provided but is older than the internal time, or if the camera is marked as deleted, this does nothing.

getMTime()

Returns the last modified time.

onModified(callback)

Registers the callback and returns a subscription object; call its unsubscribe() method to unregister the callback and stop listening.
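
A sketch of the subscribe/unsubscribe cycle:

```js
const subscription = camera.onModified(() => {
  console.log('camera changed at mtime', camera.getMTime());
});
camera.setPosition(1, 2, 3); // fires the callback
subscription.unsubscribe();  // stop listening
```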

shallowCopy(sourceCamera)

Copy the properties of sourceCamera into this camera, copying pointers to the matrices. Do not pass
a null source camera or this camera itself.

render(renderer)

This method causes the camera to set up whatever is required for viewing the scene. This
is actually handled by a subclass of vtkCamera, which is created through newInstance().

Source

index.js
import { quat, vec3, vec4, mat4 } from 'gl-matrix';

import macro from 'vtk.js/Sources/macro';
import * as vtkMath from 'vtk.js/Sources/Common/Core/Math';

const { vtkDebugMacro } = macro;

/* eslint-disable new-cap */

/*
* Convenience function to access elements of a gl-matrix. If it turns
* out I have rows and columns swapped everywhere, then I'll just change
* the order of 'row' and 'col' parameters in this function
*/
// function getMatrixElement(matrix, row, col) {
// const idx = (row * 4) + col;
// return matrix[idx];
// }

// ----------------------------------------------------------------------------
// vtkCamera methods
// ----------------------------------------------------------------------------

function vtkCamera(publicAPI, model) {
// Set our className
model.classHierarchy.push('vtkCamera');

// Set up private variables and methods
const origin = vec3.create();
const dopbasis = vec3.fromValues(0.0, 0.0, -1.0);
const upbasis = vec3.fromValues(0.0, 1.0, 0.0);
const tmpMatrix = mat4.create();
const tmpvec1 = vec3.create();
const tmpvec2 = vec3.create();
const tmpvec3 = vec3.create();

const rotateMatrix = mat4.create();
const trans = mat4.create();
const newPosition = vec3.create();
const newFocalPoint = vec3.create();

// Internal Functions that don't need to be public
function computeViewPlaneNormal() {
// VPN is -DOP
model.viewPlaneNormal[0] = -model.directionOfProjection[0];
model.viewPlaneNormal[1] = -model.directionOfProjection[1];
model.viewPlaneNormal[2] = -model.directionOfProjection[2];
}

publicAPI.orthogonalizeViewUp = () => {
const vt = publicAPI.getViewMatrix();
model.viewUp[0] = vt[4];
model.viewUp[1] = vt[5];
model.viewUp[2] = vt[6];

publicAPI.modified();
};

publicAPI.setPosition = (x, y, z) => {
if (
x === model.position[0] &&
y === model.position[1] &&
z === model.position[2]
) {
return;
}

model.position[0] = x;
model.position[1] = y;
model.position[2] = z;

// recompute the focal distance
publicAPI.computeDistance();

publicAPI.modified();
};

publicAPI.setFocalPoint = (x, y, z) => {
if (
x === model.focalPoint[0] &&
y === model.focalPoint[1] &&
z === model.focalPoint[2]
) {
return;
}

model.focalPoint[0] = x;
model.focalPoint[1] = y;
model.focalPoint[2] = z;

// recompute the focal distance
publicAPI.computeDistance();

publicAPI.modified();
};

publicAPI.setDistance = (d) => {
if (model.distance === d) {
return;
}

model.distance = d;

if (model.distance < 1e-20) {
model.distance = 1e-20;
vtkDebugMacro('Distance is set to minimum.');
}

// we want to keep the camera pointing in the same direction
const vec = model.directionOfProjection;

// recalculate FocalPoint
model.focalPoint[0] = model.position[0] + vec[0] * model.distance;
model.focalPoint[1] = model.position[1] + vec[1] * model.distance;
model.focalPoint[2] = model.position[2] + vec[2] * model.distance;

publicAPI.modified();
};

//----------------------------------------------------------------------------
// This method must be called when the focal point or camera position changes
publicAPI.computeDistance = () => {
const dx = model.focalPoint[0] - model.position[0];
const dy = model.focalPoint[1] - model.position[1];
const dz = model.focalPoint[2] - model.position[2];

model.distance = Math.sqrt(dx * dx + dy * dy + dz * dz);

if (model.distance < 1e-20) {
model.distance = 1e-20;
vtkDebugMacro('Distance is set to minimum.');

const vec = model.directionOfProjection;

// recalculate FocalPoint
model.focalPoint[0] = model.position[0] + vec[0] * model.distance;
model.focalPoint[1] = model.position[1] + vec[1] * model.distance;
model.focalPoint[2] = model.position[2] + vec[2] * model.distance;
}

model.directionOfProjection[0] = dx / model.distance;
model.directionOfProjection[1] = dy / model.distance;
model.directionOfProjection[2] = dz / model.distance;

computeViewPlaneNormal();
};

//----------------------------------------------------------------------------
// Move the position of the camera along the view plane normal. Moving
// towards the focal point (e.g., > 1) is a dolly-in, moving away
// from the focal point (e.g., < 1) is a dolly-out.
publicAPI.dolly = (amount) => {
if (amount <= 0.0) {
return;
}

// dolly moves the camera towards the focus
const d = model.distance / amount;

publicAPI.setPosition(
model.focalPoint[0] - d * model.directionOfProjection[0],
model.focalPoint[1] - d * model.directionOfProjection[1],
model.focalPoint[2] - d * model.directionOfProjection[2]
);
};

publicAPI.roll = (angle) => {
const eye = model.position;
const at = model.focalPoint;
const up = model.viewUp;
const viewUpVec4 = vec4.fromValues(up[0], up[1], up[2], 0.0);

mat4.identity(rotateMatrix);
const viewDir = vec3.fromValues(
at[0] - eye[0],
at[1] - eye[1],
at[2] - eye[2]
);
mat4.rotate(
rotateMatrix,
rotateMatrix,
vtkMath.radiansFromDegrees(angle),
viewDir
);
vec4.transformMat4(viewUpVec4, viewUpVec4, rotateMatrix);

model.viewUp[0] = viewUpVec4[0];
model.viewUp[1] = viewUpVec4[1];
model.viewUp[2] = viewUpVec4[2];

publicAPI.modified();
};

publicAPI.azimuth = (angle) => {
const fp = model.focalPoint;

mat4.identity(trans);

// translate the focal point to the origin,
// rotate about view up,
// translate back again
mat4.translate(trans, trans, vec3.fromValues(fp[0], fp[1], fp[2]));
mat4.rotate(
trans,
trans,
vtkMath.radiansFromDegrees(angle),
vec3.fromValues(model.viewUp[0], model.viewUp[1], model.viewUp[2])
);
mat4.translate(trans, trans, vec3.fromValues(-fp[0], -fp[1], -fp[2]));

// apply the transform to the position
vec3.transformMat4(
newPosition,
vec3.fromValues(model.position[0], model.position[1], model.position[2]),
trans
);
publicAPI.setPosition(newPosition[0], newPosition[1], newPosition[2]);
};

publicAPI.yaw = (angle) => {
const position = model.position;

mat4.identity(trans);

// translate the camera to the origin,
// rotate about axis,
// translate back again
mat4.translate(
trans,
trans,
vec3.fromValues(position[0], position[1], position[2])
);
mat4.rotate(
trans,
trans,
vtkMath.radiansFromDegrees(angle),
vec3.fromValues(model.viewUp[0], model.viewUp[1], model.viewUp[2])
);
mat4.translate(
trans,
trans,
vec3.fromValues(-position[0], -position[1], -position[2])
);

// apply the transform to the position
vec3.transformMat4(
newFocalPoint,
vec3.fromValues(
model.focalPoint[0],
model.focalPoint[1],
model.focalPoint[2]
),
trans
);
publicAPI.setFocalPoint(
newFocalPoint[0],
newFocalPoint[1],
newFocalPoint[2]
);
};

publicAPI.elevation = (angle) => {
const fp = model.focalPoint;

    // the rotation axis is the camera's negated right vector,
    // taken from the first row of the (row-major) view matrix
const vt = publicAPI.getViewMatrix();
const axis = [-vt[0], -vt[1], -vt[2]];

mat4.identity(trans);

// translate the focal point to the origin,
// rotate about view up,
// translate back again
mat4.translate(trans, trans, vec3.fromValues(fp[0], fp[1], fp[2]));
mat4.rotate(
trans,
trans,
vtkMath.radiansFromDegrees(angle),
vec3.fromValues(axis[0], axis[1], axis[2])
);
mat4.translate(trans, trans, vec3.fromValues(-fp[0], -fp[1], -fp[2]));

// apply the transform to the position
vec3.transformMat4(
newPosition,
vec3.fromValues(model.position[0], model.position[1], model.position[2]),
trans
);
publicAPI.setPosition(newPosition[0], newPosition[1], newPosition[2]);
};

publicAPI.pitch = (angle) => {
const position = model.position;

const vt = publicAPI.getViewMatrix();
const axis = [vt[0], vt[1], vt[2]];

mat4.identity(trans);

// translate the camera to the origin,
// rotate about axis,
// translate back again
mat4.translate(
trans,
trans,
vec3.fromValues(position[0], position[1], position[2])
);
mat4.rotate(
trans,
trans,
vtkMath.radiansFromDegrees(angle),
vec3.fromValues(axis[0], axis[1], axis[2])
);
mat4.translate(
trans,
trans,
vec3.fromValues(-position[0], -position[1], -position[2])
);

// apply the transform to the focal point
vec3.transformMat4(
newFocalPoint,
vec3.fromValues(...model.focalPoint),
trans
);
publicAPI.setFocalPoint(...newFocalPoint);
};

publicAPI.zoom = (factor) => {
if (factor <= 0) {
return;
}
if (model.parallelProjection) {
model.parallelScale /= factor;
} else {
model.viewAngle /= factor;
}
publicAPI.modified();
};

publicAPI.applyTransform = (transformMat4) => {
const vuOld = [...model.viewUp, 1.0];
const posNew = [];
const fpNew = [];
const vuNew = [];

vuOld[0] += model.position[0];
vuOld[1] += model.position[1];
vuOld[2] += model.position[2];

vec4.transformMat4(posNew, [...model.position, 1.0], transformMat4);
vec4.transformMat4(fpNew, [...model.focalPoint, 1.0], transformMat4);
vec4.transformMat4(vuNew, vuOld, transformMat4);

vuNew[0] -= posNew[0];
vuNew[1] -= posNew[1];
vuNew[2] -= posNew[2];

publicAPI.setPosition(...posNew.slice(0, 3));
publicAPI.setFocalPoint(...fpNew.slice(0, 3));
publicAPI.setViewUp(...vuNew.slice(0, 3));
};

publicAPI.getThickness = () =>
model.clippingRange[1] - model.clippingRange[0];

publicAPI.setThickness = (thickness) => {
let t = thickness;
if (t < 1e-20) {
t = 1e-20;
vtkDebugMacro('Thickness is set to minimum.');
}
publicAPI.setClippingRange(
model.clippingRange[0],
model.clippingRange[0] + t
);
};

publicAPI.setThicknessFromFocalPoint = (thickness) => {
let t = thickness;
if (t < 1e-20) {
t = 1e-20;
vtkDebugMacro('Thickness is set to minimum.');
}
publicAPI.setClippingRange(model.distance - t / 2, model.distance + t / 2);
};

// Unimplemented functions
publicAPI.setRoll = (angle) => {}; // dependency on GetOrientation() and a model.ViewTransform object, see https://github.com/Kitware/VTK/blob/master/Common/Transforms/vtkTransform.cxx and https://vtk.org/doc/nightly/html/classvtkTransform.html
publicAPI.getRoll = () => {};
publicAPI.setObliqueAngles = (alpha, beta) => {};
publicAPI.getOrientation = () => {};
publicAPI.getOrientationWXYZ = () => {};
publicAPI.getFrustumPlanes = (aspect) => {
// Return array of 24 params (4 params for each of 6 plane equations)
};
publicAPI.getCameraLightTransformMatrix = () => {};
publicAPI.deepCopy = (sourceCamera) => {};

publicAPI.physicalOrientationToWorldDirection = (ori) => {
// push the z axis through the orientation quat
const oriq = quat.fromValues(ori[0], ori[1], ori[2], ori[3]);
const coriq = quat.create();
const qdir = quat.fromValues(0.0, 0.0, 1.0, 0.0);
quat.conjugate(coriq, oriq);

// rotate the z axis by the quat
quat.multiply(qdir, oriq, qdir);
quat.multiply(qdir, qdir, coriq);

// return the z axis in world coords
return [qdir[0], qdir[1], qdir[2]];
};

publicAPI.getPhysicalToWorldMatrix = (result) => {
publicAPI.getWorldToPhysicalMatrix(result);
mat4.invert(result, result);
};

publicAPI.getWorldToPhysicalMatrix = (result) => {
mat4.identity(result);

// now the physical to vtk world rotation tform
    const physVRight = [0, 0, 0]; // filled by vtkMath.cross just below
vtkMath.cross(model.physicalViewNorth, model.physicalViewUp, physVRight);
result[0] = physVRight[0];
result[1] = physVRight[1];
result[2] = physVRight[2];
result[4] = model.physicalViewUp[0];
result[5] = model.physicalViewUp[1];
result[6] = model.physicalViewUp[2];
result[8] = -model.physicalViewNorth[0];
result[9] = -model.physicalViewNorth[1];
result[10] = -model.physicalViewNorth[2];
mat4.transpose(result, result);

vec3.set(
tmpvec1,
1 / model.physicalScale,
1 / model.physicalScale,
1 / model.physicalScale
);

mat4.scale(result, result, tmpvec1);
mat4.translate(result, result, model.physicalTranslation);
};

publicAPI.computeViewParametersFromViewMatrix = (vmat) => {
// invert to get view to world
mat4.invert(tmpMatrix, vmat);

    // note that with gl-matrix, operations happen in
    // the reverse order:
    //   mat.scale
    //   mat.translate
    // will result in the translation, then the scale;
    //   mat.mult(a, b)
    // results in performing the B transformation, then A

// then extract the params position, orientation
// push 0,0,0 through to get a translation
vec3.transformMat4(tmpvec1, origin, tmpMatrix);
publicAPI.computeDistance();
const oldDist = model.distance;
publicAPI.setPosition(tmpvec1[0], tmpvec1[1], tmpvec1[2]);

// push basis vectors to get orientation
vec3.transformMat4(tmpvec2, dopbasis, tmpMatrix);
vec3.subtract(tmpvec2, tmpvec2, tmpvec1);
vec3.normalize(tmpvec2, tmpvec2);
publicAPI.setDirectionOfProjection(tmpvec2[0], tmpvec2[1], tmpvec2[2]);

vec3.transformMat4(tmpvec3, upbasis, tmpMatrix);
vec3.subtract(tmpvec3, tmpvec3, tmpvec1);
vec3.normalize(tmpvec3, tmpvec3);
publicAPI.setViewUp(tmpvec3[0], tmpvec3[1], tmpvec3[2]);

publicAPI.setDistance(oldDist);
};

// the provided matrix should include
// translation and orientation only
// mat is physical to view
publicAPI.computeViewParametersFromPhysicalMatrix = (mat) => {
// get the WorldToPhysicalMatrix
publicAPI.getWorldToPhysicalMatrix(tmpMatrix);

// first convert the physical -> view matrix to be
// world -> view
mat4.multiply(tmpMatrix, mat, tmpMatrix);

publicAPI.computeViewParametersFromViewMatrix(tmpMatrix);
};

publicAPI.setViewMatrix = (mat) => {
model.viewMatrix = mat;
if (model.viewMatrix) {
mat4.copy(tmpMatrix, model.viewMatrix);
publicAPI.computeViewParametersFromViewMatrix(tmpMatrix);
mat4.transpose(model.viewMatrix, model.viewMatrix);
}
};

publicAPI.getViewMatrix = () => {
if (model.viewMatrix) {
return model.viewMatrix;
}

const result = mat4.create();

mat4.lookAt(
tmpMatrix,
vec3.fromValues(...model.position), // eye
vec3.fromValues(...model.focalPoint), // at
vec3.fromValues(...model.viewUp) // up
);

mat4.transpose(tmpMatrix, tmpMatrix);

mat4.copy(result, tmpMatrix);
return result;
};

publicAPI.setProjectionMatrix = (mat) => {
model.projectionMatrix = mat;
};

publicAPI.getProjectionMatrix = (aspect, nearz, farz) => {
const result = mat4.create();

if (model.projectionMatrix) {
const scale = 1 / model.physicalScale;
vec3.set(tmpvec1, scale, scale, scale);

mat4.copy(result, model.projectionMatrix);
mat4.scale(result, result, tmpvec1);
mat4.transpose(result, result);
return result;
}

mat4.identity(tmpMatrix);

// FIXME: Not sure what to do about adjust z buffer here
// adjust Z-buffer range
// this->ProjectionTransform->AdjustZBuffer( -1, +1, nearz, farz );
const cWidth = model.clippingRange[1] - model.clippingRange[0];
const cRange = [
model.clippingRange[0] + ((nearz + 1) * cWidth) / 2.0,
model.clippingRange[0] + ((farz + 1) * cWidth) / 2.0,
];

if (model.parallelProjection) {
      // set up a rectangular parallelepiped
const width = model.parallelScale * aspect;
const height = model.parallelScale;

const xmin = (model.windowCenter[0] - 1.0) * width;
const xmax = (model.windowCenter[0] + 1.0) * width;
const ymin = (model.windowCenter[1] - 1.0) * height;
const ymax = (model.windowCenter[1] + 1.0) * height;

mat4.ortho(tmpMatrix, xmin, xmax, ymin, ymax, cRange[0], cRange[1]);
mat4.transpose(tmpMatrix, tmpMatrix);
} else if (model.useOffAxisProjection) {
throw new Error('Off-Axis projection is not supported at this time');
} else {
const tmp = Math.tan(vtkMath.radiansFromDegrees(model.viewAngle) / 2.0);
let width;
let height;
if (model.useHorizontalViewAngle === true) {
width = model.clippingRange[0] * tmp;
height = (model.clippingRange[0] * tmp) / aspect;
} else {
width = model.clippingRange[0] * tmp * aspect;
height = model.clippingRange[0] * tmp;
}

const xmin = (model.windowCenter[0] - 1.0) * width;
const xmax = (model.windowCenter[0] + 1.0) * width;
const ymin = (model.windowCenter[1] - 1.0) * height;
const ymax = (model.windowCenter[1] + 1.0) * height;
const znear = cRange[0];
const zfar = cRange[1];

tmpMatrix[0] = (2.0 * znear) / (xmax - xmin);
tmpMatrix[5] = (2.0 * znear) / (ymax - ymin);
tmpMatrix[2] = (xmin + xmax) / (xmax - xmin);
tmpMatrix[6] = (ymin + ymax) / (ymax - ymin);
tmpMatrix[10] = -(znear + zfar) / (zfar - znear);
tmpMatrix[14] = -1.0;
tmpMatrix[11] = (-2.0 * znear * zfar) / (zfar - znear);
tmpMatrix[15] = 0.0;
}

mat4.copy(result, tmpMatrix);

return result;
};

publicAPI.getCompositeProjectionMatrix = (aspect, nearz, farz) => {
const vMat = publicAPI.getViewMatrix();
const pMat = publicAPI.getProjectionMatrix(aspect, nearz, farz);
const result = mat4.create();
// mats are transposed so the order is A then B
mat4.multiply(result, vMat, pMat);
return result;
};

publicAPI.setDirectionOfProjection = (x, y, z) => {
if (
model.directionOfProjection[0] === x &&
model.directionOfProjection[1] === y &&
model.directionOfProjection[2] === z
) {
return;
}

model.directionOfProjection[0] = x;
model.directionOfProjection[1] = y;
model.directionOfProjection[2] = z;

const vec = model.directionOfProjection;

// recalculate FocalPoint
model.focalPoint[0] = model.position[0] + vec[0] * model.distance;
model.focalPoint[1] = model.position[1] + vec[1] * model.distance;
model.focalPoint[2] = model.position[2] + vec[2] * model.distance;
computeViewPlaneNormal();
};

  // used to handle converting JS device orientation angles;
  // when you use this method the camera will adjust to the
  // device orientation such that the physicalViewUp you set
  // in world coordinates looks up, and the physicalViewNorth
  // you set in world coordinates will (maybe) point north
  //
  // NOTE WARNING - much of the documentation out there on how
  // orientation works is seriously wrong. Even worse, the Chrome
  // device orientation simulator is completely wrong and should
  // never be used. OMG it is so messed up.
  //
  // how it seems to work on iOS is that the device orientation
  // is specified in extrinsic angles with an alpha, beta, gamma
  // convention with axes of Z, X, Y (the code below substitutes
  // the physical coordinate system for these axes to get the right
  // modified coordinate system).
publicAPI.setDeviceAngles = (alpha, beta, gamma, screen) => {
    const physVRight = [0, 0, 0]; // filled by vtkMath.cross just below
vtkMath.cross(model.physicalViewNorth, model.physicalViewUp, physVRight);

const rotmat = mat4.create(); // phone to physical coordinates
mat4.rotate(
rotmat,
rotmat,
vtkMath.radiansFromDegrees(alpha),
model.physicalViewUp
);
mat4.rotate(rotmat, rotmat, vtkMath.radiansFromDegrees(beta), physVRight);
mat4.rotate(
rotmat,
rotmat,
vtkMath.radiansFromDegrees(gamma),
model.physicalViewNorth
);

mat4.rotate(
rotmat,
rotmat,
vtkMath.radiansFromDegrees(-screen),
model.physicalViewUp
);

const dop = vec3.fromValues(
-model.physicalViewUp[0],
-model.physicalViewUp[1],
-model.physicalViewUp[2]
);
const vup = vec3.fromValues(
model.physicalViewNorth[0],
model.physicalViewNorth[1],
model.physicalViewNorth[2]
);
vec3.transformMat4(dop, dop, rotmat);
vec3.transformMat4(vup, vup, rotmat);

publicAPI.setDirectionOfProjection(dop[0], dop[1], dop[2]);
publicAPI.setViewUp(vup[0], vup[1], vup[2]);
publicAPI.modified();
};

publicAPI.setOrientationWXYZ = (degrees, x, y, z) => {
const quatMat = mat4.create();

if (degrees !== 0.0 && (x !== 0.0 || y !== 0.0 || z !== 0.0)) {
// convert to radians
const angle = vtkMath.radiansFromDegrees(degrees);
const q = quat.create();
quat.setAxisAngle(q, [x, y, z], angle);
quat.toMat4(q, quatMat);
}

const dop = vec3.fromValues(0.0, 0.0, -1.0);
const newdop = vec3.create();
vec3.transformMat4(newdop, dop, quatMat);

const vup = vec3.fromValues(0.0, 1.0, 0.0);
const newvup = vec3.create();
vec3.transformMat4(newvup, vup, quatMat);

publicAPI.setDirectionOfProjection(...newdop);
publicAPI.setViewUp(...newvup);
publicAPI.modified();
};

publicAPI.computeClippingRange = (bounds) => {
let vn = null;
let position = null;

vn = model.viewPlaneNormal;
position = model.position;

const a = -vn[0];
const b = -vn[1];
const c = -vn[2];
const d = -(a * position[0] + b * position[1] + c * position[2]);

// Set the max near clipping plane and the min far clipping plane
const range = [a * bounds[0] + b * bounds[2] + c * bounds[4] + d, 1e-18];

// Find the closest / farthest bounding box vertex
for (let k = 0; k < 2; k++) {
for (let j = 0; j < 2; j++) {
for (let i = 0; i < 2; i++) {
const dist =
a * bounds[i] + b * bounds[2 + j] + c * bounds[4 + k] + d;
range[0] = dist < range[0] ? dist : range[0];
range[1] = dist > range[1] ? dist : range[1];
}
}
}

return range;
};
}

// ----------------------------------------------------------------------------
// Object factory
// ----------------------------------------------------------------------------

export const DEFAULT_VALUES = {
position: [0, 0, 1],
focalPoint: [0, 0, 0],
viewUp: [0, 1, 0],
directionOfProjection: [0, 0, -1],
parallelProjection: false,
useHorizontalViewAngle: false,
viewAngle: 30,
parallelScale: 1,
clippingRange: [0.01, 1000.01],
windowCenter: [0, 0],
viewPlaneNormal: [0, 0, 1],
useOffAxisProjection: false,
screenBottomLeft: [-0.5, -0.5, -0.5],
screenBottomRight: [0.5, -0.5, -0.5],
screenTopRight: [0.5, 0.5, -0.5],
freezeFocalPoint: false,
projectionMatrix: null,
viewMatrix: null,

// used for world to physical transformations
physicalTranslation: [0, 0, 0],
physicalScale: 1.0,
physicalViewUp: [0, 1, 0],
physicalViewNorth: [0, 0, -1],
};

// ----------------------------------------------------------------------------

export function extend(publicAPI, model, initialValues = {}) {
Object.assign(model, DEFAULT_VALUES, initialValues);

// Build VTK API
macro.obj(publicAPI, model);

macro.get(publicAPI, model, ['distance']);

macro.setGet(publicAPI, model, [
'parallelProjection',
'useHorizontalViewAngle',
'viewAngle',
'parallelScale',
'useOffAxisProjection',
'freezeFocalPoint',
'physicalScale',
]);

macro.getArray(publicAPI, model, [
'directionOfProjection',
'viewPlaneNormal',
'position',
'focalPoint',
]);

macro.setGetArray(publicAPI, model, ['clippingRange', 'windowCenter'], 2);

macro.setGetArray(
publicAPI,
model,
[
'viewUp',
'screenBottomLeft',
'screenBottomRight',
'screenTopRight',
'physicalTranslation',
'physicalViewUp',
'physicalViewNorth',
],
3
);

// Object methods
vtkCamera(publicAPI, model);
}

// ----------------------------------------------------------------------------

export const newInstance = macro.newInstance(extend, 'vtkCamera');

// ----------------------------------------------------------------------------

export default { newInstance, extend };